Bayesian Fisher Information, Shannon Information, and ROC Analysis for Classification Tasks

Author(s):  
Eric Clarkson

Author(s):
Martin Pokorný

In economic classification tasks, accuracy maximization is often used to evaluate classifier performance. Accuracy maximization (or error-rate minimization) rests on the assumption of equal false-positive and false-negative error costs, and accuracy cannot express true classifier performance under a skewed class distribution. These limitations make the use of accuracy on real tasks questionable: in a real binary classification task, the difference between the costs of a false positive and a false negative is usually critical. To overcome this issue, the Receiver Operating Characteristic (ROC) method can be used together with decision-analytic principles. One essential advantage of this method is that classifier performance can be visualized in a ROC graph. This paper presents concrete examples of binary classification in which the inadequacy of accuracy as an evaluation metric is shown, and the ROC method is applied to the same examples. From the set of possible classification models, a probabilistic classifier with continuous output is considered. Two main questions are addressed. The first is the selection of the best classifier from a set of candidates: for example, the accuracy metric rates two classifiers almost equivalently (87.7% and 89.3%), whereas decision analysis (via cost minimization) or ROC analysis reveals different performance under target conditions with unequal false-positive and false-negative error costs. The second is the setting of an optimal decision threshold on the classifier's output: for example, accuracy maximization finds the optimal threshold at 0.597, but cost minimization or ROC analysis, respecting the higher cost of false negatives, places the optimal threshold substantially lower (0.477).
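To make the distinction concrete, the following sketch contrasts an accuracy-optimal threshold with a cost-optimal one on synthetic classifier scores. The score distributions and the 5:1 false-negative-to-false-positive cost ratio are illustrative assumptions, not the paper's data; the point is only that unequal costs pull the optimal threshold below the accuracy-optimal one, as in the paper's 0.477 vs. 0.597 example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic continuous classifier outputs: positives score higher on average.
neg = rng.beta(2, 5, size=2000)   # scores of true negatives
pos = rng.beta(5, 2, size=2000)   # scores of true positives
scores = np.concatenate([neg, pos])
labels = np.concatenate([np.zeros_like(neg), np.ones_like(pos)])

c_fp, c_fn = 1.0, 5.0   # assumed: a false negative costs five times a false positive

thresholds = np.linspace(0.0, 1.0, 501)
acc, cost = [], []
for t in thresholds:
    pred = (scores >= t).astype(int)
    fp = np.sum((pred == 1) & (labels == 0))
    fn = np.sum((pred == 0) & (labels == 1))
    acc.append(np.mean(pred == labels))
    cost.append(c_fp * fp + c_fn * fn)

print(f"accuracy-optimal threshold: {thresholds[np.argmax(acc)]:.3f}")
print(f"cost-optimal threshold:     {thresholds[np.argmin(cost)]:.3f}  (lower: FNs cost more)")
```

On the ROC graph the same cost-optimal operating point can be found geometrically: it is the point where an iso-performance line of slope (c_fp · P(negative)) / (c_fn · P(positive)) touches the ROC curve.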


2020, Vol. 98 (8), pp. 784-789
Author(s):  
Ibraheem Nasser ◽  
Afaf Abdel-Hady

Fisher information is calculated for the ground state of the He-isoelectronic series in position space. The results are given and discussed as functions of the nuclear charge (Z) and the screening parameter (λ) for the case study of the Yukawa potential. Simple, explicit Hylleraas wave functions with one, two, and three correlated terms are used to extract the most characteristic physical features of the results. Numerical values of the Fisher information are given for the one- and two-electron charge densities, and the ratio of the two-electron to the one-electron results is defined and analyzed. To enable comparison with other work, the Fisher–Shannon information products, which measure the strength of the electron–electron correlation, are calculated for the one-electron density. The calculations of the Fisher information, the density ratio, and the Fisher–Shannon products for two-electron systems in the presence of a Yukawa potential are carried out here for the first time.
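As a point of reference for the quantities involved, the sketch below numerically evaluates the position-space Fisher information I = ∫ |∇ρ|²/ρ d³r, the Shannon entropy S = -∫ ρ ln ρ d³r, and one common form of the Fisher-Shannon product for a spherically symmetric one-electron density. The unscreened hydrogenic 1s density is a stand-in assumption (the paper uses correlated Hylleraas wave functions under a Yukawa potential); it has the virtue of known analytic values, I = 4Z² and S = 3 + ln π - 3 ln Z, against which the numerics can be checked.

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule (avoids NumPy-version issues with np.trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

Z = 2.0                                        # helium-like nuclear charge
r = np.linspace(1e-6, 20.0 / Z, 200_000)       # radial grid (atomic units)
rho = (Z**3 / np.pi) * np.exp(-2.0 * Z * r)    # hydrogenic 1s density, normalized to 1

w = 4.0 * np.pi * r**2                         # spherical volume element
drho = np.gradient(rho, r)                     # d(rho)/dr

norm = trap(w * rho, r)                        # should be ~1
I = trap(w * drho**2 / rho, r)                 # Fisher information
S = -trap(w * rho * np.log(rho), r)            # Shannon entropy

print(f"norm = {norm:.6f}")
print(f"I    = {I:.4f}  (analytic 4Z^2 = {4.0 * Z**2:.4f})")
print(f"S    = {S:.4f}  (analytic = {3.0 + np.log(np.pi) - 3.0 * np.log(Z):.4f})")

# Fisher-Shannon product, one common convention: P = (1/3) * J * I with
# entropy power J = exp(2S/3) / (2*pi*e); P >= 1, with equality for Gaussians.
J = np.exp(2.0 * S / 3.0) / (2.0 * np.pi * np.e)
print(f"P    = {J * I / 3.0:.4f}")
```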


Author(s):  
Huangjie Zheng ◽  
Jiangchao Yao ◽  
Ya Zhang ◽  
Ivor W. Tsang ◽  
Jia Wang

In information theory, Fisher information and Shannon information (entropy) quantify, respectively, the uncertainty associated with modeling a distribution and the uncertainty in specifying the outcome of its variables. The two quantities are complementary and are jointly applied to analyze information behavior in most cases. An uncertainty property asserts a fundamental trade-off between Fisher information and Shannon information, which illuminates the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the Fisher-Shannon plane and demonstrate that representation learning and log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide a better understanding of VAEs, from the perspective of Fisher information and Shannon information, in tasks such as high-resolution reconstruction and representation learning. We further propose a variant of VAEs, termed the Fisher auto-encoder (FAE), which balances Fisher information and Shannon information to meet practical needs. Our experimental results demonstrate its promise in improving reconstruction accuracy and avoiding the non-informative latent codes observed in previous work.
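As a rough illustration of the idea, the sketch below adds a Fisher-information term to a standard VAE objective. This is not the paper's exact FAE objective: the proxy used here is that, for a diagonal-Gaussian encoder q(z|x) = N(μ, σ²), the Fisher information of each latent coordinate with respect to its location is 1/σᵢ², so weighting Σᵢ 1/σᵢ² into the loss crudely trades Fisher information against the entropy-like KL term. The architecture sizes and the fisher_weight value are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FisherVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.log_var = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.dec(z), mu, log_var

def loss_fn(x, x_logits, mu, log_var, fisher_weight=1e-3):
    # Standard ELBO terms: reconstruction + KL to the unit-Gaussian prior.
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    # Proxy Fisher information of q(z|x): sum_i 1/sigma_i^2.
    fisher = torch.sum(torch.exp(-log_var))
    return recon + kl + fisher_weight * fisher

# Usage on a batch of flattened images with values in [0, 1] (shape [B, 784]):
model = FisherVAE()
x = torch.rand(32, 784)
x_logits, mu, log_var = model(x)
loss = loss_fn(x, x_logits, mu, log_var)
loss.backward()
```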


2015, Vol. 32 (7), p. 1288
Author(s):  
Eric Clarkson ◽  
Johnathan B. Cushing

2004
Author(s):  
Lyle E. Bourne ◽  
Alice F. Healy ◽  
James A. Kole ◽  
William D. Raymond
