
VL-BERT: Pre-training of Generic Visual-Linguistic Representations

2019-08-22 · ICLR 2020 · Code Available

Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, Jifeng Dai


Abstract

We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone, and extends it to take both visual and linguistic embedded features as input. Each element of the input is either a word from the input sentence or a region-of-interest (RoI) from the input image. It is designed to fit most visual-linguistic downstream tasks. To better exploit the generic representation, we pre-train VL-BERT on the massive-scale Conceptual Captions dataset, together with a text-only corpus. Extensive empirical analysis demonstrates that the pre-training procedure can better align visual-linguistic clues and benefit downstream tasks, such as visual commonsense reasoning, visual question answering, and referring expression comprehension. It is worth noting that VL-BERT achieved first place among single models on the leaderboard of the VCR benchmark. Code is released at https://github.com/jackroos/VL-BERT.
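The abstract's core idea — a single Transformer input sequence whose elements are either words or image RoIs — can be sketched in a few lines. This is a minimal, illustrative numpy sketch, not the paper's implementation: the dimensions are tiny, the random matrices stand in for learned parameters, and all names (`visual_proj`, `seg_emb`, etc.) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden size (real VL-BERT uses 768/1024; small here for illustration)

# Toy inputs: 3 word ids from a sentence, 2 RoI feature vectors from an image
# (in the paper, RoI features come from a Fast(er) R-CNN detector).
word_ids = [5, 17, 42]
roi_feats = rng.normal(size=(2, d))

vocab_emb = rng.normal(size=(100, d))   # word embedding table
visual_proj = rng.normal(size=(d, d))   # projects RoI features into the hidden space
seg_emb = rng.normal(size=(2, d))       # segment embeddings: 0 = text, 1 = image
pos_emb = rng.normal(size=(16, d))      # position embeddings

# Each input element is either a word or an RoI; both kinds receive
# segment and position embeddings before entering the Transformer.
text_part = vocab_emb[word_ids] + seg_emb[0]
image_part = roi_feats @ visual_proj + seg_emb[1]
seq = np.concatenate([text_part, image_part], axis=0)
seq = seq + pos_emb[: len(seq)]

print(seq.shape)  # (5, 8): 5 elements (3 words + 2 RoIs), each a d-dim embedding
```

The resulting `(num_words + num_rois, d)` matrix is what a standard Transformer encoder would then process jointly, which is how the model aligns visual and linguistic clues during pre-training.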

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| VCR (Q-A) dev | VL-BERT-Large | Accuracy | 75.5 | — | Unverified |
| VCR (Q-A) dev | VL-BERT-Base | Accuracy | 73.8 | — | Unverified |
| VCR (Q-AR) dev | VL-BERT-Large | Accuracy | 58.9 | — | Unverified |
| VCR (Q-AR) dev | VL-BERT-Base | Accuracy | 55.2 | — | Unverified |
| VCR (QA-R) dev | VL-BERT-Base | Accuracy | 74.4 | — | Unverified |
| VCR (QA-R) dev | VL-BERT-Large | Accuracy | 77.9 | — | Unverified |
| VCR (Q-AR) test | VL-BERT-Large | Accuracy | 59.7 | — | Unverified |
| VCR (QA-R) test | VL-BERT-Large | Accuracy | 78.4 | — | Unverified |
| VCR (Q-A) test | VL-BERT-Large | Accuracy | 75.8 | — | Unverified |
| VQA v2 test-dev | VL-BERT-Large | Accuracy | 71.79 | — | Unverified |
| VQA v2 test-dev | VL-BERT-Base | Accuracy | 71.16 | — | Unverified |
| VQA v2 test-std | VL-BERT-Large | Overall | 72.2 | — | Unverified |