
DUET: Cross-modal Semantic Grounding for Contrastive Zero-shot Learning

2022-07-04 · Code Available

Zhuo Chen, Yufeng Huang, Jiaoyan Chen, Yuxia Geng, Wen Zhang, Yin Fang, Jeff Z. Pan, Huajun Chen

Abstract

Zero-shot learning (ZSL) aims to predict unseen classes whose samples never appear during training. One of the most effective and widely used forms of semantic information for zero-shot image classification is attributes, i.e., annotations of class-level visual characteristics. However, current methods often fail to discriminate subtle visual distinctions between images, due not only to the shortage of fine-grained annotations but also to attribute imbalance and co-occurrence. In this paper, we present a transformer-based end-to-end ZSL method named DUET, which integrates latent semantic knowledge from pre-trained language models (PLMs) via a self-supervised multi-modal learning paradigm. Specifically, we (1) develop a cross-modal semantic grounding network to investigate the model's capability of disentangling semantic attributes from images; (2) apply an attribute-level contrastive learning strategy to further enhance the model's discrimination of fine-grained visual characteristics against attribute co-occurrence and imbalance; (3) propose a multi-task learning policy for considering multi-modal objectives. We find that DUET achieves state-of-the-art performance on three standard ZSL benchmarks and a knowledge-graph-equipped ZSL benchmark, that its components are effective, and that its predictions are interpretable.
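
To make the attribute-level contrastive objective concrete, here is a minimal PyTorch sketch of an InfoNCE-style loss over paired image and attribute embeddings. This is an illustrative reconstruction under stated assumptions, not the authors' released code: the function name `attribute_contrastive_loss`, the tensor shapes, and the temperature value are all hypothetical.

```python
# Minimal sketch of an attribute-level contrastive loss (InfoNCE-style).
# Illustrative only; names, shapes, and the temperature are assumptions,
# not the DUET implementation.

import torch
import torch.nn.functional as F


def attribute_contrastive_loss(image_feats: torch.Tensor,
                               attr_feats: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """Pull each image feature toward its paired attribute embedding.

    image_feats: (B, D) image features, one per attribute instance.
    attr_feats:  (B, D) embeddings of the matching attribute phrases,
                 e.g. from a pre-trained language model.
    """
    # L2-normalize so dot products are cosine similarities.
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(attr_feats, dim=-1)

    # (B, B) similarity matrix: diagonal entries are positive pairs;
    # off-diagonal entries serve as negatives, so attributes that
    # co-occur within a batch become hard negatives.
    logits = img @ txt.t() / temperature
    targets = torch.arange(img.size(0), device=img.device)

    # Symmetric cross-entropy over image->attribute and attribute->image.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    b, d = 8, 256
    loss = attribute_contrastive_loss(torch.randn(b, d), torch.randn(b, d))
    print(loss.item())
```

Treating in-batch attributes as negatives is one plausible way to counter attribute co-occurrence and imbalance, since frequently co-occurring attributes are then explicitly pushed apart in the embedding space.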

Benchmark Results

Dataset         Model   Metric                                  Claimed   Verified   Status
AwA2            DUET    average top-1 classification accuracy   69.9      —          Unverified
CUB-200-2011    DUET    average top-1 classification accuracy   72.3      —          Unverified
SUN Attribute   DUET    average top-1 classification accuracy   64.4      —          Unverified
