
Dynamic Loss For Robust Learning

2022-11-22

Shenwang Jiang, Jianan Li, Jizhou Zhang, Ying Wang, Tingfa Xu


Abstract

Label noise and class imbalance commonly coexist in real-world data. Previous works on robust learning, however, usually address only one of these data biases and underperform when both are present. To close this gap, this work presents a novel meta-learning-based dynamic loss that automatically adjusts the objective function over the course of training to robustly learn a classifier from long-tailed noisy data. Concretely, our dynamic loss comprises a label corrector and a margin generator, which respectively correct noisy labels and generate additive per-class classification margins by perceiving the underlying data distribution as well as the learning state of the classifier. Equipped with a new hierarchical sampling strategy that enriches a small amount of unbiased metadata with diverse and hard samples, the two components of the dynamic loss are optimized jointly through meta-learning and guide the classifier to adapt well to clean, balanced test data. Extensive experiments show our method achieves state-of-the-art accuracy on multiple real-world and synthetic datasets with various types of data biases, including CIFAR-10/100, Animal-10N, ImageNet-LT, and WebVision. Code will soon be publicly available.
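The two components of the dynamic loss can be illustrated with a minimal sketch. This is not the authors' implementation (their code is not yet released); the mixing weight `alpha`, the soft pseudo-labels, and the per-class `margins` below are hypothetical stand-ins for what the meta-learned label corrector and margin generator would produce.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dynamic_loss(logits, noisy_onehot, soft_pseudo, alpha, margins):
    """Illustrative sketch of a label-corrected, margin-adjusted loss.

    logits       : (N, C) classifier outputs
    noisy_onehot : (N, C) possibly-noisy one-hot labels
    soft_pseudo  : (N, C) soft labels standing in for the label corrector's output
    alpha        : scalar in [0, 1], hypothetical label-correction weight
    margins      : (C,) additive per-class margins, standing in for the
                   margin generator's output
    """
    # Label correction: blend the given (noisy) labels with soft pseudo-labels.
    corrected = (1 - alpha) * noisy_onehot + alpha * soft_pseudo
    # Margin adjustment: shift logits by an additive per-class margin.
    shifted = logits + margins[None, :]
    # Cross-entropy against the corrected targets.
    log_probs = np.log(softmax(shifted) + 1e-12)
    return -(corrected * log_probs).sum(axis=1).mean()

# With alpha = 0 and zero margins this reduces to ordinary cross-entropy.
logits = np.array([[2.0, 0.0], [0.0, 2.0]])
onehot = np.array([[1.0, 0.0], [0.0, 1.0]])
soft = np.array([[0.9, 0.1], [0.1, 0.9]])
loss = dynamic_loss(logits, onehot, soft, alpha=0.0, margins=np.zeros(2))
```

In the paper both `alpha`-like quantities and `margins` are produced by small meta-networks and updated by meta-gradients computed on the unbiased metadata, rather than fixed by hand as here.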

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| mini WebVision 1.0 | Dynamic Loss (Inception-ResNet-v2) | Top-1 Accuracy | 80.12 | — | Unverified |

Reproductions