Mutual Information as an Index of Diagnostic Test Performance

2003 ◽  
Vol 42 (03) ◽  
pp. 260-264 ◽  
Author(s):  
W. A. Benish

Summary Objectives: This paper demonstrates that diagnostic test performance can be quantified as the average amount of information the test result (R) provides about the disease state (D). Methods: A fundamental concept of information theory, mutual information, is directly applicable to this problem. This statistic quantifies the amount of information that one random variable contains about another random variable. Prior to performing a diagnostic test, R and D are random variables. Hence, their mutual information, I(D;R), is the amount of information that R provides about D. Results: I(D;R) is a function of both 1) the pretest probabilities of the disease state and 2) the set of conditional probabilities relating each possible test result to each possible disease state. The area under the receiver operating characteristic curve (AUC) is a popular measure of diagnostic test performance which, in contrast to I(D;R), is independent of the pretest probabilities; it is a function of only the set of conditional probabilities. The AUC is not a measure of diagnostic information. Conclusions: Because I(D;R) is dependent upon pretest probabilities, knowledge of the setting in which a diagnostic test is employed is a necessary condition for quantifying the amount of information it provides. Advantages of I(D;R) over the AUC are that it can be calculated without invoking an arbitrary curve fitting routine, it is applicable to situations in which multiple diagnoses are under consideration, and it quantifies test performance in meaningful units (bits of information).
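The abstract's central claim — that I(D;R) is a function of both the pretest probabilities P(D) and the conditional probabilities P(R|D) — can be illustrated with a short Python sketch. The probabilities below are hypothetical values chosen for illustration, not figures from the paper:

```python
import math

def mutual_information(p_d, p_r_given_d):
    """I(D;R) in bits, computed from the pretest probabilities P(D)
    and the conditional probabilities P(R | D)."""
    n_d = len(p_d)
    n_r = len(p_r_given_d[0])
    # Joint distribution P(D, R) and marginal P(R)
    joint = [[p_d[d] * p_r_given_d[d][r] for r in range(n_r)] for d in range(n_d)]
    p_r = [sum(joint[d][r] for d in range(n_d)) for r in range(n_r)]
    # I(D;R) = sum over (d, r) of P(d, r) * log2( P(d, r) / (P(d) P(r)) )
    return sum(joint[d][r] * math.log2(joint[d][r] / (p_d[d] * p_r[r]))
               for d in range(n_d) for r in range(n_r) if joint[d][r] > 0)

# Hypothetical binary example: 10% pretest probability of disease,
# a test with sensitivity 0.90 and specificity 0.95.
p_d = [0.1, 0.9]                 # P(disease), P(no disease)
p_r_given_d = [[0.90, 0.10],     # P(positive | disease), P(negative | disease)
               [0.05, 0.95]]     # P(positive | no disease), P(negative | no disease)

print(mutual_information(p_d, p_r_given_d))
```

Changing `p_d` while holding `p_r_given_d` fixed changes I(D;R), which is the abstract's point of contrast with the AUC: the same test carries a different amount of information in different clinical settings.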

2009 ◽  
Vol 48 (06) ◽  
pp. 552-557 ◽  
Author(s):  
W. A. Benish

Summary Objectives: Mutual information is a fundamental concept of information theory that quantifies the expected value of the amount of information that diagnostic testing provides about a patient’s disease state. The purpose of this report is to provide both intuitive and axiomatic descriptions of mutual information and, thereby, promote the use of this statistic as a measure of diagnostic test performance. Methods: We derive the mathematical expression for mutual information from the intuitive assumption that diagnostic information is the average amount by which diagnostic testing reduces our surprise upon ultimately learning a patient’s diagnosis. This concept is formalized by defining “surprise” as the surprisal, a function that quantifies the unlikelihood of an event. Mutual information is also shown to be the only function that conforms to a set of axioms which are reasonable requirements of a measure of diagnostic information. These axioms are related to the axioms of information theory used to derive the expression for entropy. Results: Both approaches to defining mutual information lead to the known relationship that mutual information is equal to the pre-test uncertainty of the disease state minus the expected value of the post-test uncertainty of the disease state. Mutual information also has the property of being additive when a test provides information about independent health problems. Conclusion: Mutual information is the best single measure of the ability of a diagnostic test to discriminate among the possible disease states.
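The relationship stated in the Results — mutual information equals the pre-test uncertainty of the disease state minus the expected post-test uncertainty — can be checked numerically. The sketch below uses illustrative probabilities (not from the paper), with uncertainty measured as Shannon entropy, i.e., expected surprisal:

```python
import math

def entropy(p):
    """Shannon entropy in bits: the expected surprisal -log2 p."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical example: three candidate disease states, a binary test result.
p_d = [0.6, 0.3, 0.1]                           # pre-test probabilities P(D)
p_r_given_d = [[0.8, 0.2], [0.3, 0.7], [0.1, 0.9]]  # P(R | D)

# Joint P(D, R), marginal P(R), and posterior P(D | R) via Bayes' rule.
joint = [[p_d[d] * p_r_given_d[d][r] for r in range(2)] for d in range(3)]
p_r = [sum(joint[d][r] for d in range(3)) for r in range(2)]
post = [[joint[d][r] / p_r[r] for d in range(3)] for r in range(2)]

pre_test_uncertainty = entropy(p_d)                                  # H(D)
expected_post_test = sum(p_r[r] * entropy(post[r]) for r in range(2))  # E[H(D|R)]
info = pre_test_uncertainty - expected_post_test                     # I(D;R)
```

The quantity `info` agrees with the direct definition of I(D;R) as a sum over the joint distribution, which is the identity I(D;R) = H(D) − H(D|R).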


Entropy ◽  
2020 ◽  
Vol 22 (1) ◽  
pp. 97 ◽  
Author(s):  
William A. Benish

The fundamental information theory functions of entropy, relative entropy, and mutual information are directly applicable to clinical diagnostic testing. This is a consequence of the fact that an individual’s disease state and diagnostic test result are random variables. In this paper, we review the application of information theory to the quantification of diagnostic uncertainty, diagnostic information, and diagnostic test performance. An advantage of information theory functions over more established test performance measures is that they can be used when multiple disease states are under consideration as well as when the diagnostic test can yield multiple or continuous results. Since more than one diagnostic test is often required to help determine a patient’s disease state, we also discuss the application of the theory to situations in which more than one diagnostic test is used. The total diagnostic information provided by two or more tests can be partitioned into meaningful components.
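The closing claim — that the total diagnostic information from two or more tests can be partitioned into meaningful components — corresponds to the chain rule for mutual information: I(D; R1, R2) = I(D; R1) + I(D; R2 | R1). The sketch below verifies this partition numerically for two hypothetical binary tests assumed conditionally independent given D (all numbers illustrative, not from the paper):

```python
import math
from itertools import product

def mi_from_joint(pxy):
    """I(X;Y) in bits from a joint distribution given as a dict {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in pxy.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items() if p > 0)

# Hypothetical binary disease state and two binary tests,
# conditionally independent given D.
p_d = [0.2, 0.8]
p_r1_d = [[0.85, 0.15], [0.10, 0.90]]   # P(R1 | D)
p_r2_d = [[0.70, 0.30], [0.20, 0.80]]   # P(R2 | D)

# Joint P(D, R1, R2).
joint = {(d, r1, r2): p_d[d] * p_r1_d[d][r1] * p_r2_d[d][r2]
         for d, r1, r2 in product(range(2), repeat=3)}

# Total information from both tests: I(D; R1, R2).
i_total = mi_from_joint({(d, (r1, r2)): p for (d, r1, r2), p in joint.items()})

# First component: I(D; R1), marginalizing out R2.
pd_r1 = {}
for (d, r1, r2), p in joint.items():
    pd_r1[(d, r1)] = pd_r1.get((d, r1), 0.0) + p
i_r1 = mi_from_joint(pd_r1)

# Second component: I(D; R2 | R1) = sum over r1 of P(r1) * I(D; R2 | R1 = r1).
p_r1 = {0: 0.0, 1: 0.0}
for (d, r1, r2), p in joint.items():
    p_r1[r1] += p
i_r2_given_r1 = sum(
    p_r1[r1v] * mi_from_joint({(d, r2): joint[(d, r1v, r2)] / p_r1[r1v]
                               for d in (0, 1) for r2 in (0, 1)})
    for r1v in (0, 1))
```

Here `i_total` equals `i_r1 + i_r2_given_r1`: the information contributed by the second test, given that the first has already been performed, is the marginal component of the partition.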


2003 ◽  
Vol 49 (11) ◽  
pp. 1783-1784 ◽  
Author(s):  
Victor M Montori ◽  
Gordon H Guyatt

2020 ◽  
Vol 203 ◽  
pp. e348 ◽  
Author(s):  
Miles Mannas* ◽  
Sinan Khadhouri ◽  
Kevin M Gallagher ◽  
Kenneth R Mackenzie ◽  
Taimur T Shah ◽  
...  
