Contrastive Representations for Label Noise Require Fine-Tuning

2021-08-20

Pierre Nodet, Vincent Lemaire, Alexis Bondu, Antoine Cornuéjols


Abstract

In this paper we show that combining a contrastive representation with a label-noise-robust classification head requires fine-tuning the representation in order to achieve state-of-the-art performance. Since fine-tuned representations are shown to outperform frozen ones, one can conclude that noise-robust classification heads are indeed able to promote meaningful representations when provided with a suitable starting point. Experiments are conducted to draw a comprehensive picture of performance, featuring six methods and nine noise instances of three different kinds (none, symmetric, and asymmetric). In the presence of noise, the experiments show that fine-tuning the contrastive representation allows the six methods to achieve better results than end-to-end learning and sets a new reference compared to the recent state of the art. The results are also remarkably stable with respect to the noise level.
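The two noise kinds the abstract mentions are standard in the label-noise literature: symmetric noise flips a label to a uniformly random other class, while asymmetric noise flips it to a fixed confusable class. As a rough illustration (not the authors' code; the helper `corrupt_labels` and its `pair_map` argument are hypothetical), such noise instances could be generated like this:

```python
import random

def corrupt_labels(labels, num_classes, noise_rate,
                   kind="symmetric", pair_map=None, seed=0):
    """Inject label noise into a list of integer class labels.

    symmetric: with probability `noise_rate`, replace the label with a
        uniformly chosen *different* class.
    asymmetric: with probability `noise_rate`, replace the label with a
        fixed confusable class given by `pair_map` (e.g. truck -> automobile).
    """
    rng = random.Random(seed)  # seeded for reproducible noise instances
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            if kind == "symmetric":
                noisy.append(rng.choice([c for c in range(num_classes) if c != y]))
            else:  # asymmetric
                noisy.append(pair_map[y])
        else:
            noisy.append(y)
    return noisy
```

A noise rate of 0.0 corresponds to the "none" setting, so the three kinds in the paper can all be expressed through one such generator.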
