
Retraining with Predicted Hard Labels Provably Increases Model Accuracy

2024-06-17

Rudrajit Das, Inderjit S. Dhillon, Alessandro Epasto, Adel Javanmard, Jieming Mao, Vahab Mirrokni, Sujay Sanghavi, Peilin Zhong


Abstract

The performance of a model trained with noisy labels is often improved by simply retraining the model with its own predicted hard labels (i.e., 1/0 labels). Yet, a detailed theoretical characterization of this phenomenon is lacking. In this paper, we theoretically analyze retraining in a linearly separable binary classification setting with randomly corrupted given labels, and prove that retraining can improve the population accuracy obtained by initially training with the given (noisy) labels. To the best of our knowledge, this is the first such theoretical result. Retraining finds application in improving training with local label differential privacy (DP), which involves training with noisy labels. We empirically show that retraining selectively on the samples for which the predicted label matches the given label significantly improves label DP training at no extra privacy cost; we call this consensus-based retraining. For example, when training ResNet-18 on CIFAR-100 with ε = 3 label DP, we obtain more than 6% improvement in accuracy with consensus-based retraining.
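The consensus-based retraining procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a toy NumPy logistic-regression model on synthetic linearly separable data with randomly flipped labels, and all function names (`train_linear`, `consensus_retrain`, etc.) are ours. The key step is keeping only the samples where the initially trained model's predicted hard label agrees with the given (noisy) label, then retraining on that subset.

```python
import numpy as np

def train_linear(X, y, lr=0.1, epochs=200):
    """Logistic regression via gradient descent; labels y in {0, 1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)           # gradient step
    return w

def predict(w, X):
    """Hard (1/0) labels from the linear model."""
    return (X @ w > 0).astype(int)

def consensus_retrain(X, y_noisy):
    """Consensus-based retraining (sketch): train on the given noisy
    labels, predict hard labels, keep only samples where the prediction
    agrees with the given label, then retrain on that consensus subset."""
    w0 = train_linear(X, y_noisy)
    mask = predict(w0, X) == y_noisy               # consensus set
    w1 = train_linear(X[mask], y_noisy[mask])
    return w0, w1

# Toy demo: linearly separable data with 20% random label flips
# (hypothetical setup, not the paper's experimental configuration).
rng = np.random.default_rng(0)
n, d = 2000, 10
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y_clean = (X @ w_star > 0).astype(int)
flip = rng.random(n) < 0.2
y_noisy = np.where(flip, 1 - y_clean, y_clean)

w0, w1 = consensus_retrain(X, y_noisy)
acc0 = (predict(w0, X) == y_clean).mean()          # accuracy after noisy training
acc1 = (predict(w1, X) == y_clean).mean()          # accuracy after consensus retraining
```

In the label DP setting, `y_noisy` would be the labels produced by a randomized-response mechanism; since the consensus filter only reuses those already-privatized labels, it incurs no additional privacy cost.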
