12-in-1: Multi-Task Vision and Language Representation Learning
Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, Stefan Lee
Code
- github.com/facebookresearch/vilbert-multi-task (official, referenced in the paper; PyTorch)
- github.com/jialinwu17/tmpimgs (PyTorch)
- github.com/jiasenlu/vilbert_beta (PyTorch)
- github.com/johntiger1/multitask_multimodal (PyTorch)
- github.com/Cloud-CV/vilbert-multi-task (framework not specified)
Abstract
Much of vision-and-language research focuses on a small but diverse set of independent tasks and supporting datasets, often studied in isolation; however, the visually-grounded language understanding skills required for success at these tasks overlap significantly. In this work, we investigate these relationships between vision-and-language tasks by developing a large-scale, multi-task training regime. Our approach culminates in a single model trained on 12 datasets from four broad categories of tasks, including visual question answering, caption-based image retrieval, grounding referring expressions, and multi-modal verification. Compared to independently trained single-task models, this represents a reduction from approximately 3 billion parameters to 270 million while simultaneously improving performance by 2.05 points on average across tasks. We use our multi-task framework to perform an in-depth analysis of the effects of jointly training diverse tasks. Further, we show that finetuning task-specific models from our single multi-task model can lead to further improvements, achieving performance at or above the state of the art.
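The sketch below illustrates the parameter-sharing idea in the abstract with a minimal PyTorch example: a single shared vision-and-language trunk reused across tasks, with small task-specific output heads, trained by alternating over tasks. The `SharedTrunk` module, the head names and output sizes, and the simple round-robin schedule are illustrative placeholders, not the paper's ViLBERT architecture, its 12 task heads, or its task sampling scheme; see the official repository for the actual implementation.

```python
# Minimal sketch (not the authors' code) of multi-task training with a shared
# trunk and per-task heads. Dataset loading and the paper's real scheduling
# are omitted; all modules and batches here are toy stand-ins.
import torch
import torch.nn as nn

class SharedTrunk(nn.Module):
    """Hypothetical stand-in for a shared vision-and-language encoder."""
    def __init__(self, img_dim=2048, txt_dim=768, hidden=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)
        self.txt_proj = nn.Linear(txt_dim, hidden)
        self.fuse = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())

    def forward(self, img_feats, txt_feats):
        # Pool region/token features and fuse into one joint representation.
        v = self.img_proj(img_feats).mean(dim=1)
        t = self.txt_proj(txt_feats).mean(dim=1)
        return self.fuse(torch.cat([v, t], dim=-1))

class MultiTaskModel(nn.Module):
    def __init__(self, task_output_sizes, hidden=512):
        super().__init__()
        self.trunk = SharedTrunk(hidden=hidden)
        # One lightweight head per task; the trunk parameters are shared by all.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n) for name, n in task_output_sizes.items()}
        )

    def forward(self, task, img_feats, txt_feats):
        return self.heads[task](self.trunk(img_feats, txt_feats))

# Illustrative task names and output sizes (placeholders, not the paper's exact heads).
model = MultiTaskModel({"vqa": 3129, "retrieval": 1, "refer": 1, "nlvr2": 2})
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fns = {"vqa": nn.CrossEntropyLoss(), "retrieval": nn.BCEWithLogitsLoss(),
            "refer": nn.BCEWithLogitsLoss(), "nlvr2": nn.CrossEntropyLoss()}

def fake_batch(task, batch_size=4):
    """Random tensors standing in for real dataset batches."""
    img = torch.randn(batch_size, 36, 2048)   # 36 image-region features
    txt = torch.randn(batch_size, 20, 768)    # 20 token embeddings
    if task in ("vqa", "nlvr2"):
        target = torch.randint(0, model.heads[task].out_features, (batch_size,))
    else:
        target = torch.rand(batch_size, 1)
    return img, txt, target

# Simple round-robin over tasks; the paper uses a more careful schedule.
tasks = list(model.heads.keys())
for step in range(8):
    task = tasks[step % len(tasks)]
    img, txt, target = fake_batch(task)
    loss = loss_fns[task](model(task, img, txt), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: task={task} loss={loss.item():.3f}")
```

Because every task updates the same trunk, the total parameter count grows only by the small per-task heads, which is the source of the parameter reduction the abstract describes.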
Tasks
- Visual Question Answering
- Caption-Based Image Retrieval
- Grounding Referring Expressions
- Multi-Modal Verification