Robust Procedures for Estimating and Testing in the Framework of Divergence Measures

Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 430
Author(s):  
Leandro Pardo ◽  
Nirian Martín

The approach to estimation and testing based on divergence measures has become, over the last 30 years, a very popular technique not only in statistics but also in other areas, such as machine learning, pattern recognition, etc. [...]


Mathematics ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 1423
Author(s):  
Javier Bonilla ◽  
Daniel Vélez ◽  
Javier Montero ◽  
J. Tinguaro Rodríguez

In the last two decades, information entropy measures have been widely applied in fuzzy clustering problems to regularize solutions by avoiding the formation of partitions with excessively overlapping clusters. Following this idea, relative entropy or divergence measures have been applied similarly, particularly to let that kind of entropy-based regularization also take into account, and interact with, cluster size variables. In particular, since Rényi divergence generalizes several other divergence measures, its application in fuzzy clustering seems promising for devising more general and potentially more effective methods. However, previous works using Rényi entropy or Rényi divergence in fuzzy clustering have, respectively, either not considered cluster sizes (thus applying regularization in terms of entropy, not divergence) or employed divergence without a regularization purpose. The main contribution of this work is therefore the introduction of a new regularization term, based on the Rényi relative entropy between membership degrees and observation ratios per cluster, to penalize overlapping solutions in fuzzy clustering analysis. Specifically, this Rényi divergence-based term is added to the variance-based Fuzzy C-means objective function when cluster sizes are allowed to vary. This leads to two new fuzzy clustering methods exhibiting Rényi divergence-based regularization, the second extending the first by considering a Gaussian kernel metric instead of the Euclidean distance. Iterative update expressions for these methods are derived through the explicit application of Lagrange multipliers; an interesting feature of these expressions is that the proposed methods appear to exploit a greater amount of information in the updating steps for membership degrees and observation ratios per cluster. Finally, an extensive computational study shows the feasibility and comparatively good performance of the proposed methods.
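For orientation, here is a minimal Python sketch of divergence-regularized fuzzy c-means with cluster-size variables, using the KL-divergence regularizer between memberships and size ratios, i.e. the limiting (α → 1) relative of the Rényi scheme described above. The function name and the regularization weight `lam` are illustrative assumptions; the paper's actual Rényi-based update expressions differ.

```python
import numpy as np

def kl_regularized_fcm(X, c, lam=1.0, n_iter=100, seed=0):
    """Divergence-regularized fuzzy c-means (illustrative sketch).

    Uses the KL regularizer between memberships u_ij and cluster-size
    ratios pi_j -- the alpha -> 1 limit of the Renyi case; the paper's
    general updates are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    V = X[rng.choice(n, c, replace=False)]       # initial centroids
    pi = np.full(c, 1.0 / c)                     # cluster-size ratios
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)  # squared distances
        # Shift per row before exponentiating for numerical stability;
        # the shift cancels in the row normalization.
        U = pi * np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / lam)
        U /= U.sum(axis=1, keepdims=True)        # membership update
        pi = U.mean(axis=0)                      # observation ratios per cluster
        V = (U.T @ X) / U.sum(axis=0)[:, None]   # weighted centroid update
    return U, V, pi

# Toy usage: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
U, V, pi = kl_regularized_fcm(X, c=2)
print(np.round(V, 2), np.round(pi, 2))
```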


2001 ◽  
Vol 93 (1-2) ◽  
pp. 1-16 ◽  
Author(s):  
Jan Beirlant ◽  
Luc Devroye ◽  
László Györfi ◽  
Igor Vajda

2010 ◽  
Vol 47 (1) ◽  
pp. 216-234 ◽  
Author(s):  
Filia Vonta ◽  
Alex Karagrigoriou

Measures of divergence or discrepancy are used either to measure mutual information concerning two variables or to construct model selection criteria. In this paper we focus on divergence measures based on Csiszár's class of divergence measures. In particular, we propose a measure of divergence between the residual lives of two items that have both survived up to some time t, as well as a measure of divergence between their past lives, both based on Csiszár's class. Furthermore, we derive properties of these measures and provide examples based on the Cox model and on frailty or transformation models.
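As a hedged sketch of the quantities involved (the notation below is assumed here, not quoted from the paper): Csiszár's class of divergences between densities f and g, for a convex generator φ with φ(1) = 0, is

$$D_\varphi(F, G) = \int g(x)\, \varphi\!\left(\frac{f(x)}{g(x)}\right) dx,$$

and restricting attention to items that have survived up to time t (residual lives with densities f(x)/\bar{F}(t) on x > t, where \bar{F} denotes the survival function) plausibly gives a divergence of the form

$$D_\varphi(F, G; t) = \int_t^\infty \frac{g(x)}{\bar{G}(t)}\, \varphi\!\left(\frac{f(x)\,\bar{G}(t)}{g(x)\,\bar{F}(t)}\right) dx,$$

with the past-lives version obtained analogously by conditioning on failure before t.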


Kybernetes ◽  
1995 ◽  
Vol 24 (2) ◽  
pp. 15-28
Author(s):  
L. Pardo ◽  
D. Morales ◽  
I.J. Taneja

2012 ◽  
Vol 8 (1) ◽  
pp. 17-32 ◽  
Author(s):  
K. Jain ◽  
Ram Saraswat

A New Information Inequality and Its Application in Establishing Relation Among Various f-Divergence Measures

An information inequality is established in terms of Csiszár f-divergence measures using convexity arguments and Jensen's inequality. This inequality is then applied to compare particular divergences that play a fundamental role in information theory, such as the Kullback-Leibler distance, Hellinger discrimination, chi-square distance, J-divergence, and others.
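To make the comparison concrete, here is a small Python sketch (names chosen for illustration) showing how the divergences mentioned above all arise from one Csiszár-class implementation by swapping the convex generator φ:

```python
import numpy as np

def csiszar_divergence(p, q, phi):
    """Csiszar f-divergence D_phi(P||Q) = sum_x q(x) * phi(p(x)/q(x))
    for discrete distributions on a matching, strictly positive support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * phi(p / q)))

# Standard convex generators; each satisfies phi(1) = 0.
generators = {
    "kullback_leibler": lambda t: t * np.log(t),
    "hellinger":        lambda t: 0.5 * (np.sqrt(t) - 1.0) ** 2,
    "chi_square":       lambda t: (t - 1.0) ** 2,
    "j_divergence":     lambda t: (t - 1.0) * np.log(t),
}

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
for name, phi in generators.items():
    print(f"{name}: {csiszar_divergence(p, q, phi):.4f}")
```

Since every generator here is convex with φ(1) = 0, inequalities among these divergences reduce to pointwise comparisons of the generators, which is the kind of relation the paper's information inequality establishes.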


1994 ◽  
Vol 19 (1) ◽  
pp. 23-42 ◽  
Author(s):  
Philip H. Ramsey

A review of the literature shows that robust procedures for testing variances have become available. The two best procedures are one proposed by O'Brien (1981) and another by Brown and Forsythe (1974). An examination of these procedures over a wide variety of populations confirms their robustness and indicates that optimal power can usually be obtained by applying a test for kurtosis to aid in the choice between the two procedures.
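As a rough illustration of that decision rule, the sketch below screens the pooled data for kurtosis and then applies either Brown-Forsythe (the median-centered Levene test in SciPy) or O'Brien's procedure. The O'Brien transform shown is one common statement of the 1981 proposal, and the direction of the kurtosis-based choice is an assumption here, not Ramsey's exact rule.

```python
import numpy as np
from scipy import stats

def obrien_test(*samples, w=0.5):
    """O'Brien-style test of variances: transform each observation, then
    run a one-way ANOVA on the transformed scores (common statement of
    the 1981 transform; verify against the original before relying on it)."""
    transformed = []
    for x in samples:
        x = np.asarray(x, float)
        n, m, s2 = len(x), x.mean(), x.var(ddof=1)
        r = ((w + n - 2) * n * (x - m) ** 2 - w * s2 * (n - 1)) / ((n - 1) * (n - 2))
        transformed.append(r)
    return stats.f_oneway(*transformed)

def robust_variance_test(*samples, alpha=0.05):
    """Choose between the two procedures via a kurtosis screen
    (assumed direction: marked non-normal kurtosis -> Brown-Forsythe)."""
    pooled = np.concatenate([np.asarray(x, float) for x in samples])
    _, kurt_p = stats.kurtosistest(pooled)
    if kurt_p < alpha:
        # Median-centered Levene test == Brown-Forsythe (1974).
        return "brown-forsythe", stats.levene(*samples, center='median')
    return "obrien", obrien_test(*samples)

rng = np.random.default_rng(1)
a, b = rng.normal(0, 1, 40), rng.normal(0, 2, 40)
print(robust_variance_test(a, b))
```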

