
Distribution Alignment: A Unified Framework for Long-tail Visual Recognition

2021-03-30 · CVPR 2021 · Code Available

Songyang Zhang, Zeming Li, Shipeng Yan, Xuming He, Jian Sun


Abstract

Despite the recent success of deep neural networks, it remains challenging to effectively model the long-tail class distribution in visual recognition tasks. To address this problem, we first investigate the performance bottleneck of the two-stage learning framework via an ablation study. Motivated by our findings, we propose a unified distribution alignment strategy for long-tail visual recognition. Specifically, we develop an adaptive calibration function that adjusts the classification scores for each data point. We then introduce a generalized re-weighting method into the two-stage learning to balance the class prior, which provides a flexible and unified solution to diverse scenarios in visual recognition tasks. We validate our method with extensive experiments on four tasks: image classification, semantic segmentation, object detection, and instance segmentation. Our approach achieves state-of-the-art results across all four recognition tasks with a simple and unified framework. The code and models will be made publicly available at: https://github.com/Megvii-BaseDetection/DisAlign
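The two ingredients the abstract describes — an adaptive, per-class calibration of classification scores, and a generalized re-weighting of the class prior — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the scalar blending weight `lam` (which the paper conditions on the input), and the exponent `rho` are all illustrative assumptions.

```python
import numpy as np

def adaptive_calibration(logits, alpha, beta, lam):
    """Hypothetical per-class affine calibration of classifier scores.

    logits: (N, C) raw classification scores.
    alpha, beta: (C,) learnable per-class scale and offset.
    lam: blending weight in [0, 1] between calibrated and raw scores
         (in the paper this weight is predicted from the input).
    """
    return lam * (alpha * logits + beta) + (1.0 - lam) * logits

def generalized_reweight(class_counts, rho=1.0):
    """Hypothetical generalized re-weighting of the class prior.

    Weights are proportional to (1 / n_j) ** rho, normalized to mean 1,
    so rho = 0 recovers uniform weights and larger rho favors tail classes.
    """
    w = (1.0 / np.asarray(class_counts, dtype=float)) ** rho
    return w * len(w) / w.sum()

# Example: tail classes receive larger loss weights as rho grows.
weights = generalized_reweight([1000, 100, 10], rho=1.0)
```

With `rho = 0` every class weight is 1 (no re-balancing), while `rho = 1` weights each class inversely to its frequency; intermediate values interpolate between the two regimes.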


Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ImageNet-LT | DisAlign | Top-1 Accuracy | 53.4 | — | Unverified |
| iNaturalist 2018 | DisAlign | Top-1 Accuracy | 70.6 | — | Unverified |
| Places-LT | DisAlign | Top-1 Accuracy | 39.3 | — | Unverified |
