Long-tail Learning
Long-tailed learning, one of the most challenging problems in visual recognition, aims to train well-performing models on data whose class frequencies follow a long-tailed distribution: a few head classes contribute most of the training images, while many tail classes have only a handful of samples each.
Papers
Showing 1–10 of 131 papers
Datasets: ImageNet-LT, CIFAR-100-LT (ρ=100), CIFAR-10-LT (ρ=10), iNaturalist 2018, CIFAR-100-LT (ρ=10), Places-LT, CIFAR-10-LT (ρ=100), CIFAR-100-LT (ρ=50), MIMIC-CXR-LT, NIH-CXR-LT, COCO-MLT, VOC-MLT. Here ρ denotes the imbalance ratio, i.e., the ratio between the sample counts of the most and least frequent classes.
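The CIFAR-10-LT / CIFAR-100-LT variants above are commonly built by subsampling the balanced originals with an exponential decay controlled by the imbalance ratio ρ. A minimal sketch of that per-class count schedule (the function name `long_tailed_counts` is illustrative, not from any specific codebase):

```python
import numpy as np

def long_tailed_counts(n_max: int, num_classes: int, rho: float) -> np.ndarray:
    """Per-class sample counts for an exponential long-tailed profile.

    Class i keeps n_max * rho ** (-i / (num_classes - 1)) samples, so the
    head class (i = 0) keeps n_max samples and the tail class keeps
    n_max / rho samples; rho is the imbalance ratio.
    """
    exponents = np.arange(num_classes) / (num_classes - 1)
    return np.floor(n_max * rho ** (-exponents)).astype(int)

# CIFAR-10-LT with rho = 100: head class keeps 5000 images, tail keeps 50.
counts = long_tailed_counts(n_max=5000, num_classes=10, rho=100)
```

With ρ=10 the decay is much gentler, which is why the ρ=10 splits are consistently easier than the ρ=100 ones in the results below.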
Benchmark Results
| # | Model | Metric | Claimed (%) | Verified | Status |
|---|---|---|---|---|---|
| 1 | LDAM-DRW + SSP | Error Rate | 52.89 | — | Unverified |
| 2 | LDAM-DRW-RSG | Error Rate | 51.5 | — | Unverified |
| 3 | Hybrid-PSC | Error Rate | 51.07 | — | Unverified |
| 4 | CBD+TailCalibX | Error Rate | 49.1 | — | Unverified |
| 5 | MetaSAug-LDAM | Error Rate | 47.73 | — | Unverified |
| 6 | MiSLAS | Error Rate | 47.7 | — | Unverified |
| 7 | GCL | Error Rate | 46.4 | — | Unverified |
| 8 | TADE | Error Rate | 46.1 | — | Unverified |
| 9 | BCL (ResNet-32) | Error Rate | 43.4 | — | Unverified |
| 10 | NCL (ResNet-32) | Error Rate | 43.2 | — | Unverified |