
A Tunable Loss Function for Robust Classification: Calibration, Landscape, and Generalization

2019-06-05

Tyler Sypherd, Mario Diaz, John Kevin Cava, Gautam Dasarathy, Peter Kairouz, Lalitha Sankar


Abstract

We introduce a tunable loss function called α-loss, parameterized by α ∈ (0, ∞], which interpolates between the exponential loss (α = 1/2), the log-loss (α = 1), and the 0-1 loss (α = ∞), for the machine learning setting of classification. Theoretically, we illustrate a fundamental connection between α-loss and Arimoto conditional entropy, verify the classification-calibration of α-loss in order to demonstrate asymptotic optimality via Rademacher complexity generalization techniques, and build upon a notion called strictly local quasi-convexity in order to quantitatively characterize the optimization landscape of α-loss. Practically, we perform class imbalance, robustness, and classification experiments on benchmark image datasets using convolutional neural networks. Our main practical conclusion is that certain tasks may benefit from tuning α-loss away from log-loss (α = 1), and to this end we provide simple heuristics for the practitioner. In particular, navigating the α hyperparameter can readily provide superior model robustness to label flips (α > 1) and sensitivity to imbalanced classes (α < 1).
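The interpolation described in the abstract can be sketched in a few lines of code. This is a minimal illustration, assuming the margin-free form of α-loss from the paper, ℓ_α(p) = (α/(α−1))·(1 − p^(1−1/α)) applied to the probability p assigned to the true label, with the log-loss −log p recovered in the limit α → 1 and the soft 0-1 loss 1 − p at α = ∞; the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def alpha_loss(p_true, alpha):
    """Sketch of alpha-loss on the probability assigned to the true label.

    Assumed form (from the paper's definition):
        alpha != 1:  (alpha / (alpha - 1)) * (1 - p ** (1 - 1/alpha))
        alpha == 1:  -log(p)        (log-loss, the limiting case)
        alpha == inf: 1 - p         (soft 0-1 loss)
    """
    p = np.asarray(p_true, dtype=float)
    if np.isinf(alpha):
        return 1.0 - p                       # alpha = infinity: soft 0-1 loss
    if np.isclose(alpha, 1.0):
        return -np.log(p)                    # alpha = 1: log-loss
    return (alpha / (alpha - 1.0)) * (1.0 - p ** (1.0 - 1.0 / alpha))
```

For example, at α = 1/2 the expression reduces to 1/p − 1, an exponential-loss-like penalty that grows sharply for low-confidence correct predictions (useful for imbalanced classes), while α > 1 flattens the penalty and dampens the influence of noisy labels.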
