Learning to Estimate Epistemic Uncertainty in Neural Networks
Katherine Elizabeth Brown, Doug Talbert
Abstract
Epistemic uncertainty quantification provides useful insight into a deep neural network's understanding of the relationship between its training distribution and unseen instances. Bayesian approaches have been shown to quantify this relationship better than softmax probabilities. Unfortunately, those approaches to uncertainty quantification require multiple Monte-Carlo samples of a neural network, augmenting the neural network to learn distributions over its weights, or utilizing an ensemble of neural networks. Such extra computation is problematic in time-critical, resource-limited scenarios such as trauma triage. In this work, we propose a technique that allows epistemic uncertainty to be estimated using learned regression algorithms. We find that this technique, once trained, allows epistemic uncertainty to be effectively and efficiently predicted.
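The idea in the abstract can be sketched in miniature: obtain per-instance epistemic uncertainty from Monte-Carlo samples of a stochastic network, then fit an ordinary regression model that maps instance features directly to that uncertainty, replacing the repeated forward passes at inference time. The snippet below is a hedged toy illustration, not the paper's implementation: the stochastic "network" is simulated, the feature set is invented for the example, and a least-squares fit stands in for whatever regression algorithms the paper actually evaluates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for T stochastic forward passes (e.g., MC dropout).
# Prediction spread grows away from x = 0, mimicking higher epistemic
# uncertainty far from the training region. This is an assumption made
# purely for illustration.
def mc_predictions(X, T=50):
    base = np.sin(X[:, 0])
    noise_scale = 0.05 + 0.5 * np.abs(X[:, 0]) / 3.0
    return base[None, :] + rng.normal(
        0.0, noise_scale[None, :], size=(T, X.shape[0])
    )

X = rng.uniform(-3, 3, size=(200, 1))
samples = mc_predictions(X)          # shape (T, N): T MC samples per instance
epistemic = samples.var(axis=0)      # predictive variance = uncertainty target

# Learned uncertainty estimator: least squares on simple hand-chosen
# features, standing in for the learned regression algorithms in the paper.
features = np.column_stack([np.ones(len(X)), np.abs(X[:, 0]), X[:, 0] ** 2])
w, *_ = np.linalg.lstsq(features, epistemic, rcond=None)
pred_unc = features @ w              # one cheap pass instead of T samples

corr = np.corrcoef(pred_unc, epistemic)[0, 1]
print(f"correlation between learned and MC uncertainty: {corr:.2f}")
```

Once the regressor is trained, estimating uncertainty for a new instance costs a single feature computation and dot product rather than T forward passes, which is the efficiency gain the abstract claims for time-critical settings.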