
A Tunable Loss Function for Binary Classification

2019-02-12

Tyler Sypherd, Mario Diaz, Lalitha Sankar, Peter Kairouz


Abstract

We present α-loss, α ∈ [1, ∞], a tunable loss function for binary classification that bridges log-loss (α = 1) and 0-1 loss (α = ∞). We prove that α-loss has an equivalent margin-based form and is classification-calibrated, two desirable properties for a good surrogate of the ideal yet intractable 0-1 loss. For logistic regression-based classification, we provide an upper bound on the difference between the empirical and expected risk at the empirical risk minimizers of α-loss by exploiting its Lipschitzianity together with recent results on the landscape features of empirical risk functions. Finally, we show that α-loss with α = 2 performs better than log-loss on MNIST for logistic regression.
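The abstract does not reproduce the loss formula itself. The sketch below assumes the standard form of the α-loss, ℓ_α(p) = (α/(α−1))·(1 − p^((α−1)/α)), where p is the probability the model assigns to the true label; the α = 1 and α = ∞ endpoints are handled as limits (log-loss and the soft 0-1 loss 1 − p, respectively). The function name `alpha_loss` is illustrative, not from the paper.

```python
import numpy as np

def alpha_loss(p_true, alpha):
    """Evaluate the α-loss at the probability assigned to the true label.

    p_true : probability the model assigns to the correct class, in (0, 1]
    alpha  : tuning parameter; alpha=1 recovers log-loss and
             alpha=inf gives the soft 0-1 loss 1 - p_true.
             (Formula assumed from the paper's definition, not this page.)
    """
    p = np.asarray(p_true, dtype=float)
    if alpha == 1:            # limit alpha -> 1: -log(p), the log-loss
        return -np.log(p)
    if np.isinf(alpha):       # limit alpha -> inf: 1 - p, a soft 0-1 loss
        return 1.0 - p
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))
```

For example, at α = 2 the loss of a correct-class probability of 0.25 is 2·(1 − √0.25) = 1.0, and a perfectly confident correct prediction (p = 1) incurs zero loss for every α.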
