Revisiting Document Representations for Large-Scale Zero-Shot Learning

2021-04-21 · NAACL 2021 · Code Available

Jihyung Kil, Wei-Lun Chao

Abstract

Zero-shot learning aims to recognize unseen objects using their semantic representations. Most existing works use visual attributes labeled by humans, which are not suitable for large-scale applications. In this paper, we revisit the use of documents as semantic representations. We argue that documents such as Wikipedia pages contain rich visual information, which, however, can easily be buried by the vast amount of non-visual sentences. To address this issue, we propose a semi-automatic mechanism for visual sentence extraction that leverages the document section headers and the clustering structure of visual sentences. The extracted visual sentences, after a novel weighting scheme to distinguish similar classes, essentially form semantic representations like visual attributes but require much less human effort. On the ImageNet dataset with over 10,000 unseen classes, our representations lead to a 64% relative improvement over the commonly used ones.
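The two ideas in the abstract — keeping sentences from visually oriented document sections and down-weighting terms shared across similar classes — can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the header whitelist, the function names, and the IDF-style weighting are all illustrative stand-ins for the semi-automatic extraction and the class-discriminative weighting described above.

```python
import math
from collections import Counter

# Hypothetical whitelist of section headers likely to describe appearance.
VISUAL_HEADERS = {"description", "appearance", "plumage", "morphology"}

def extract_visual_sentences(sections):
    """Keep sentences that appear under visually oriented section headers.

    sections: dict mapping a section header to its list of sentences.
    """
    kept = []
    for header, sentences in sections.items():
        if header.lower() in VISUAL_HEADERS:
            kept.extend(sentences)
    return kept

def idf_weights(class_docs):
    """Down-weight terms shared by many classes (a simple IDF-style stand-in
    for the paper's weighting scheme that distinguishes similar classes).

    class_docs: one text string per class.
    """
    n = len(class_docs)
    df = Counter()  # in how many class documents each term occurs
    for doc in class_docs:
        df.update(set(doc.lower().split()))
    return {term: math.log(n / count) for term, count in df.items()}

# Toy example: only the "Description" section survives extraction, and the
# term "wings", shared by both classes, receives zero discriminative weight.
sections = {
    "Description": ["It has bright red wings."],
    "Taxonomy": ["The species was named in 1758."],
}
visual = extract_visual_sentences(sections)
weights = idf_weights(["red wings", "blue wings"])
```

In a real pipeline the header whitelist would be seeded by a human and expanded by clustering sentence embeddings, so that visual sentences under unlisted headers are still recovered; the sketch keeps only the keyword step to stay self-contained.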
