
Combining Metric Learning and Attention Heads For Accurate and Efficient Multilabel Image Classification

2022-09-14 · Code Available

Kirill Prokofiev, Vladislav Sovrasov


Abstract

Multi-label image classification predicts a set of labels for a given image. Unlike multi-class classification, which assigns exactly one label per image, this setup is applicable to a broader range of applications. In this work we revisit two popular approaches to multi-label classification: transformer-based heads and graph-based branches that process label-relation information. Although transformer-based heads are considered to achieve better results than graph-based branches, we argue that with a proper training strategy graph-based methods can show only a small accuracy drop while spending fewer computational resources at inference time. In our training strategy, instead of Asymmetric Loss (ASL), the de-facto standard for multi-label classification, we introduce its metric-learning modification. In each binary classification sub-problem it operates on L_2-normalized feature vectors coming from a backbone and enforces the angles between the normalized representations of positive and negative samples to be as large as possible. This provides better discrimination ability than binary cross-entropy loss applied to unnormalized features. With the proposed loss and training strategy, we obtain state-of-the-art results among single-modality methods on widespread multi-label classification benchmarks such as MS-COCO, PASCAL-VOC, NUS-WIDE, and Visual Genome 500. The source code of our method is available as part of the OpenVINO Training Extensions: https://github.com/openvinotoolkit/deep-object-reid/tree/multilabel
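The abstract's core idea — treating each label as a binary sub-problem over L_2-normalized features with an angular margin, combined with asymmetric focusing as in ASL — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, and the `scale`, `margin`, and `gamma` values are assumptions, not the paper's settings.

```python
import numpy as np

def angular_binary_logits(features, class_weights, scale=20.0, margin=0.2):
    """Cosine logits for per-class binary sub-problems (illustrative sketch).

    features: (N, D) backbone embeddings; class_weights: (C, D) class prototypes.
    Both are L2-normalized, so the dot product is the cosine of the angle
    between a sample and a class prototype.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    cos = f @ w.T  # (N, C), each entry in [-1, 1]
    # Subtracting a margin from the cosine used for positive labels forces
    # positive samples to be separated from negatives by a larger angle.
    return scale * (cos - margin), scale * cos

def asymmetric_bce(logits_pos, logits_neg, targets, gamma_neg=4.0, gamma_pos=0.0):
    """Asymmetric focal-style BCE over the cosine logits (sketch of ASL)."""
    p_pos = 1.0 / (1.0 + np.exp(-logits_pos))  # probabilities for positive labels
    p_neg = 1.0 / (1.0 + np.exp(-logits_neg))  # probabilities for negative labels
    # Stronger down-weighting of easy negatives (gamma_neg > gamma_pos) is the
    # asymmetry that handles the positive/negative imbalance of multi-label data.
    loss_pos = targets * (1 - p_pos) ** gamma_pos * np.log(np.clip(p_pos, 1e-8, 1.0))
    loss_neg = (1 - targets) * p_neg ** gamma_neg * np.log(np.clip(1 - p_neg, 1e-8, 1.0))
    return -(loss_pos + loss_neg).mean()
```

Because the features and class prototypes are normalized, the margin acts directly on the angle between them rather than on unbounded logits, which is what yields the improved discrimination claimed over plain binary cross-entropy.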

Benchmark Results

Dataset          Model                                                 Metric  Claimed  Verified  Status
MS-COCO          MLD-TResNet-L-AAM [640x640]                           mAP     91.3     -         Unverified
NUS-WIDE         MLD-TResNet-L-AAM [448x448]                           mAP     68.3     -         Unverified
PASCAL VOC 2007  MLD-TResNet-L-AAM (448, pretrained on OpenImages V6)  mAP     96.7     -         Unverified
