EXACT: How to Train Your Accuracy
2022-05-19 · Code Available
Ivan Karpukhin, Stanislav Dereka, Sergey Kolesnikov
- github.com/tinkoff-ai/exact (official, in paper, PyTorch, ★ 10)
- github.com/ivan-chai/exact (official, in paper, PyTorch, ★ 2)
Abstract
Classification tasks are usually evaluated in terms of accuracy. However, accuracy is a discontinuous function of the model parameters and cannot be directly optimized with gradient ascent. Popular methods instead minimize cross-entropy, hinge loss, or other surrogate losses, which can lead to suboptimal results. In this paper, we propose a new optimization framework: we introduce stochasticity into the model's output and optimize expected accuracy, i.e., the accuracy of the stochastic model. Extensive experiments on linear models and deep image classification show that the proposed optimization method is a powerful alternative to widely used classification losses.
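The core idea can be illustrated with a small numerical sketch. Below, Gaussian noise is added to a model's logits and the accuracy of the resulting stochastic classifier is estimated by Monte Carlo sampling; unlike hard accuracy, this expectation varies smoothly as the logits change. This is an illustrative sketch under assumed Gaussian noise, not the paper's closed-form estimator; the function name and parameters are chosen here for demonstration.

```python
import numpy as np

def expected_accuracy(logits, labels, sigma=1.0, n_samples=10000, seed=0):
    """Monte Carlo estimate of the accuracy of a stochastic model whose
    logits are perturbed by Gaussian noise with standard deviation sigma.
    Illustrative only: the paper derives an analytic, differentiable form."""
    rng = np.random.default_rng(seed)
    logits = np.asarray(logits, dtype=float)   # shape (batch, classes)
    labels = np.asarray(labels)                # shape (batch,)
    # Draw n_samples noisy copies of the logits: (n_samples, batch, classes).
    noisy = logits[None] + sigma * rng.standard_normal((n_samples,) + logits.shape)
    preds = noisy.argmax(axis=-1)              # (n_samples, batch)
    # Fraction of (sample, example) pairs classified correctly.
    return (preds == labels[None]).mean()

# A confident example scores near 1; a near-tie scores near chance level,
# and small changes to the logits move the estimate gradually.
print(expected_accuracy([[2.0, 0.0, -1.0], [0.2, 0.1, 0.0]], [0, 1], sigma=1.0))
```

Because the expectation is smooth in the logits, it admits gradients (in the paper, via an analytic expression rather than sampling), which is what makes direct accuracy optimization possible.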
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-10 | EXACT (WRN-28-10) | Percentage correct | 96.73 | — | Unverified |
| CIFAR-100 | EXACT (WRN-28-10) | Percentage correct | 82.68 | — | Unverified |
| MNIST | EXACT (M3-CNN) | Percentage error | 0.33 | — | Unverified |
| SVHN | EXACT (WRN-16-8) | Percentage error | 2.21 | — | Unverified |