
Summary
Objectives: Mutual information is a fundamental concept of information theory that quantifies the expected amount of information that diagnostic testing provides about a patient’s disease state. The purpose of this report is to provide both intuitive and axiomatic descriptions of mutual information and, thereby, promote the use of this statistic as a measure of diagnostic test performance.
Methods: We derive the mathematical expression for mutual information from the intuitive assumption that diagnostic information is the average amount by which diagnostic testing reduces our surprise upon ultimately learning a patient’s diagnosis. This concept is formalized by defining “surprise” as the surprisal, a function that quantifies the unlikelihood of an event. Mutual information is also shown to be the only function that conforms to a set of axioms that are reasonable requirements for a measure of diagnostic information. These axioms are related to the axioms of information theory used to derive the expression for entropy.
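As a sketch of this derivation in our own notation (the symbols D for the disease state and T for the test result are our assumptions, not necessarily those of the full text), averaging the reduction in surprisal yields the standard expression for mutual information:

```latex
\begin{align*}
% Surprisal: the unlikelihood of an event E with probability P(E)
s(E) &= -\log P(E) \\
% Mutual information as the expected reduction in surprisal about
% the disease state D upon learning the test result T
I(D;T) &= \mathbb{E}\big[\, s(D) - s(D \mid T) \,\big] \\
       &= \sum_{d,\,t} P(d,t)\,\log \frac{P(d \mid t)}{P(d)}
\end{align*}
```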
Results: Both approaches to defining mutual information lead to the known relationship that mutual information equals the pre-test uncertainty of the disease state minus the expected value of the post-test uncertainty of the disease state. Mutual information also has the property of being additive when a test provides information about independent health problems.
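To make this identity concrete, the following minimal sketch (the binary disease/test setup, joint probabilities, and variable names are illustrative assumptions, not values from the report) computes mutual information both as the entropy difference stated above and as the direct sum over the joint distribution:

```python
import numpy as np

# Hypothetical joint distribution P(disease, test result) for a binary
# disease state and a binary test; rows = disease (absent, present),
# columns = test result (negative, positive). Numbers are illustrative only.
joint = np.array([[0.72, 0.08],
                  [0.05, 0.15]])

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_disease = joint.sum(axis=1)   # marginal P(D): the pre-test distribution
p_test = joint.sum(axis=0)      # marginal P(T)

# Pre-test uncertainty of the disease state
h_pre = entropy(p_disease)

# Expected post-test uncertainty: H(D | T = t) averaged over P(t)
h_post = sum(p_t * entropy(joint[:, t] / p_t)
             for t, p_t in enumerate(p_test))

# Mutual information two ways: the entropy-difference identity above,
# and the direct double sum over the joint distribution.
mi_identity = h_pre - h_post
mi_direct = sum(joint[d, t] * np.log2(joint[d, t] / (p_disease[d] * p_test[t]))
                for d in range(2) for t in range(2) if joint[d, t] > 0)

print(f"H(D) = {h_pre:.4f} bits, E[H(D|T)] = {h_post:.4f} bits")
print(f"I(D;T) = {mi_identity:.4f} bits (identity), {mi_direct:.4f} bits (direct)")
```

The two computations agree, illustrating that the expected reduction in uncertainty coincides with the standard definition of mutual information.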
Conclusion: Mutual information is the best single measure of the ability of a diagnostic test to discriminate among the possible disease states.
Keywords
Diagnostic tests - information theory - mutual information