SOTAVerified

Learning Prototype Classifiers for Long-Tailed Recognition

2023-02-01 · Code Available

Saurabh Sharma, Yongqin Xian, Ning Yu, Ambuj Singh


Abstract

The problem of long-tailed recognition (LTR) has received attention in recent years due to the fundamental power-law distribution of objects in the real world. Most recent works in LTR use softmax classifiers, which are biased in that they correlate classifier norm with the amount of training data for a given class. In this work, we show that learning prototype classifiers addresses the biased-softmax problem in LTR. Prototype classifiers can deliver promising results simply using Nearest-Class-Mean (NCM), a special case where prototypes are empirical centroids. We go one step further and propose to jointly learn prototypes by using distances to prototypes in representation space as the logit scores for classification. Further, we theoretically analyze the properties of Euclidean-distance-based prototype classifiers that lead to stable gradient-based optimization which is robust to outliers. To enable independent distance scales along each channel, we enhance prototype classifiers by learning channel-dependent temperature parameters. Our analysis shows that prototypes learned by prototype classifiers are better separated than empirical centroids. Results on four LTR benchmarks show that the prototype classifier outperforms or is comparable to state-of-the-art methods. Our code is made available at https://github.com/saurabhsharma1993/prototype-classifier-ltr.
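The abstract's core mechanism — using negative distances to class prototypes as logits, with NCM (empirical centroids) as a special case and learned per-channel temperatures for independent distance scales — can be sketched as follows. This NumPy sketch is an illustrative reconstruction based only on the abstract, not the authors' implementation; all function and variable names are ours.

```python
import numpy as np

def prototype_logits(x, prototypes, temps):
    """Logits as negative temperature-scaled squared Euclidean distances.

    x          : (d,) feature vector in representation space
    prototypes : (K, d) one prototype per class
    temps      : (d,) per-channel temperature; all-ones recovers the
                 plain (unscaled) Euclidean-distance classifier
    """
    diff = prototypes - x                    # (K, d) broadcasted differences
    return -np.sum(temps * diff ** 2, axis=1)  # (K,) higher = closer = more likely

# NCM special case: prototypes are the empirical class centroids of the features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 4))             # toy features for 10 samples
labels = np.array([0] * 5 + [1] * 5)
centroids = np.stack([feats[labels == k].mean(axis=0) for k in (0, 1)])

logits = prototype_logits(feats[0], centroids, np.ones(4))
pred = int(np.argmax(logits))                # predicted class for the first sample
```

In the paper's setting the prototypes (and, in the enhanced variant, the per-channel temperatures) would be learned jointly by gradient descent on a classification loss over these logits, rather than fixed at the centroids as in the NCM baseline above.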

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-100-LT (ρ=10) | PC | Error Rate | 30.88 | — | Unverified |
| CIFAR-100-LT (ρ=100) | PC | Error Rate | 46.59 | — | Unverified |
| CIFAR-100-LT (ρ=50) | PC | Error Rate | 42.25 | — | Unverified |
