
Bounds on mutual information of mixture data for classification tasks

2021-01-27

Yijun Ding, Amit Ashok


Abstract

The data for many classification problems, such as pattern and speech recognition, follow mixture distributions. To quantify the optimum performance for classification tasks, the Shannon mutual information is a natural information-theoretic metric, as it is directly related to the probability of error. The mutual information between mixture data and the class label admits no closed-form expression, and no efficient algorithms exist to compute it exactly. We introduce a variational upper bound, a lower bound, and three estimators, all employing pair-wise divergences between mixture components. We compare the new bounds and estimators with Monte Carlo stochastic sampling and with bounds derived from entropy bounds. Finally, we evaluate the performance of the bounds and estimators through numerical simulations.
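To make the quantities concrete, the sketch below contrasts a generic Monte Carlo estimate of the mutual information I(X; C) for a two-class Gaussian mixture with a well-known pairwise-KL upper bound of the form -Σ_i w_i log Σ_j w_j exp(-KL(p_i‖p_j)). This is an illustration of the two estimation strategies the abstract refers to, under a hypothetical 1-D mixture; the specific bounds and estimators introduced in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class 1-D Gaussian mixture: equal priors, shared sigma.
weights = np.array([0.5, 0.5])   # class priors p(c)
means = np.array([-1.0, 1.0])    # component means
sigma = 1.0                      # shared component standard deviation

def log_gauss(x, mu, s):
    """Log-density of N(mu, s^2) evaluated at x."""
    return -0.5 * ((x - mu) / s) ** 2 - np.log(s * np.sqrt(2.0 * np.pi))

# --- Monte Carlo estimate: I(X; C) = E[ log p(x|c) - log p(x) ] ---
n = 200_000
c = rng.choice(len(weights), size=n, p=weights)   # sample class labels
x = rng.normal(means[c], sigma)                   # sample x | c
log_cond = log_gauss(x, means[c], sigma)          # log p(x | c)
# Marginal p(x) = sum_j w_j p(x | j), via broadcasting over components.
log_marg = np.log(np.exp(log_gauss(x[:, None], means[None, :], sigma)) @ weights)
mi_mc = np.mean(log_cond - log_marg)

# --- Pairwise-KL upper bound on I(X; C) ---
# For Gaussians with a shared sigma, KL(p_i || p_j) = (mu_i - mu_j)^2 / (2 sigma^2).
kl = (means[:, None] - means[None, :]) ** 2 / (2.0 * sigma**2)
mi_upper = -np.sum(weights * np.log(np.exp(-kl) @ weights))

print(f"Monte Carlo estimate:    {mi_mc:.3f} nats")
print(f"Pairwise-KL upper bound: {mi_upper:.3f} nats")
```

The Monte Carlo estimate converges to the true mutual information but requires many samples; the pairwise-divergence bound is computed in closed form from the component parameters alone, which is what makes this family of bounds attractive for mixture data.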
