When in Doubt: Improving Classification Performance with Alternating Normalization
2021-09-28 · Findings (EMNLP) 2021
Menglin Jia, Austin Reiter, Ser-Nam Lim, Yoav Artzi, Claire Cardie
- github.com/KMnP/can (official, in paper, PyTorch) ★ 14
Abstract
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification. CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution using the predicted class distributions of high-confidence validation examples. CAN is easily applicable to any probabilistic classifier, with minimal computation overhead. We analyze the properties of CAN using simulated experiments, and empirically demonstrate its effectiveness across a diverse set of classification tasks.
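The re-adjustment described above can be sketched as a Sinkhorn-style alternating normalization: stack the high-confidence predicted distributions with the uncertain example's distribution, then alternately rescale columns toward the class prior and renormalize rows into probability distributions. The following is a minimal illustrative sketch, not the paper's exact algorithm (which additionally includes a degree-of-freedom hyperparameter and other details); the function name and fixed iteration count are assumptions for illustration.

```python
import numpy as np

def can_adjust(high_conf_probs, uncertain_prob, prior, iters=3):
    """Illustrative alternating-normalization adjustment (simplified sketch).

    high_conf_probs: (m, d) array of high-confidence predicted distributions.
    uncertain_prob:  (d,) predicted distribution of the uncertain example.
    prior:           (d,) class prior distribution.
    """
    # Stack high-confidence rows with the uncertain example's row.
    A = np.vstack([high_conf_probs, uncertain_prob]).astype(float)
    n = A.shape[0]
    for _ in range(iters):
        # Column step: rescale each column so column mass follows the prior.
        A = A / A.sum(axis=0) * (prior * n)
        # Row step: renormalize each row back into a probability distribution.
        A = A / A.sum(axis=1, keepdims=True)
    # The last row is the re-adjusted distribution for the uncertain example.
    return A[-1]
```

Because the high-confidence rows barely change under these normalizations, the procedure mainly redistributes probability mass within the uncertain example's row, nudging it toward consistency with the class prior and the confident predictions.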