SOTAVerified

Decoupling Representation and Classifier for Long-Tailed Recognition

2019-10-21 · ICLR 2020 · Code Available

Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, Yannis Kalantidis


Abstract

The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification. Our code is available at https://github.com/facebookresearch/classifier-balancing.
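The abstract's key claim, that strong long-tailed performance can be reached by "adjusting only the classifier" on top of instance-balanced representations, is realized in the paper by methods such as cRT and the τ-normalized/learnable-weight-scaling (LWS) classifiers listed in the benchmarks below. A minimal NumPy sketch of the τ-normalization idea (function name and toy values are illustrative, not the authors' code):

```python
import numpy as np

def tau_normalize(W, tau=1.0):
    """Rescale each class's linear-classifier weight vector by norm**tau.

    tau=1 gives full L2 normalization; tau=0 leaves the weights unchanged.
    W: (num_classes, feat_dim) array of classifier weights.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / (norms ** tau + 1e-12)

# Toy example: after instance-balanced training, head classes tend to
# acquire larger weight norms than tail classes; tau-normalization
# shrinks that gap without retraining the representation.
W = np.array([[3.0, 4.0],   # "head" class, norm 5.0
              [0.3, 0.4]])  # "tail" class, norm 0.5
W_norm = tau_normalize(W, tau=1.0)
print(np.linalg.norm(W_norm, axis=1))  # both row norms ≈ 1
```

Sweeping τ between 0 and 1 interpolates between the unmodified classifier and a fully norm-balanced one; LWS instead learns a per-class scaling factor with the backbone frozen.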

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-10-LT (ρ=10) | LWS | Error Rate | 8.9 | | Unverified |
| CIFAR-10-LT (ρ=10) | cRT | Error Rate | 9.0 | | Unverified |
| ImageNet-LT | CB LWS | Top-1 Accuracy | 41.4 | | Unverified |
| iNaturalist 2018 | CB LWS | Top-1 Accuracy | 69.5 | | Unverified |
| Places-LT | CB LWS | Top-1 Accuracy | 37.6 | | Unverified |

Reproductions