Prismer: A Vision-Language Model with Multi-Task Experts
Shikun Liu, Linxi Fan, Edward Johns, Zhiding Yu, Chaowei Xiao, Anima Anandkumar
Code
- github.com/nvlabs/prismer (official, in paper; PyTorch) ★ 1,311
- github.com/KastanDay/video-pretrained-transformer (PyTorch) ★ 54
Abstract
Recent vision-language models have shown impressive multi-modal generation capabilities. However, they typically require training huge models on massive datasets. As a more scalable alternative, we introduce Prismer, a data- and parameter-efficient vision-language model that leverages an ensemble of task-specific experts. Prismer only requires training of a small number of components, with the majority of network weights inherited from multiple readily-available, pre-trained experts and kept frozen during training. By leveraging experts from a wide range of domains, we show that Prismer can efficiently pool this expert knowledge and adapt it to various vision-language reasoning tasks. In our experiments, we show that Prismer achieves fine-tuned and few-shot learning performance competitive with the current state of the art, whilst requiring up to two orders of magnitude less training data. Code is available at https://github.com/NVlabs/prismer.
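The training recipe the abstract describes — inherit most weights from pre-trained experts, freeze them, and train only a small number of components — can be sketched in PyTorch. This is a minimal illustration, not Prismer's actual architecture: the module names, sizes, and the simple mean-pooling of expert outputs here are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class FrozenExpertSketch(nn.Module):
    """Illustrative sketch (hypothetical names/sizes): frozen pre-trained
    'experts' plus one small trainable adaptor that pools their outputs."""

    def __init__(self, dim: int = 32, num_experts: int = 3):
        super().__init__()
        # Stand-ins for readily-available pre-trained experts
        # (e.g. depth, segmentation); in Prismer these are large models.
        self.experts = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_experts)
        )
        for p in self.experts.parameters():
            p.requires_grad = False  # inherited weights stay frozen
        # The only trainable component in this sketch.
        self.adaptor = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Naively pool expert outputs by averaging (illustrative only).
        pooled = torch.stack([e(x) for e in self.experts]).mean(dim=0)
        return self.adaptor(pooled)

model = FrozenExpertSketch()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
```

With `dim=32` and three experts, only the adaptor's 1,056 parameters (32×32 weights plus 32 biases) out of 4,224 total are trainable — the same frozen-majority pattern the abstract credits for Prismer's data and parameter efficiency, at toy scale.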
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| COCO Captions | Prismer | BLEU-4 | 40.4 | — | Unverified |
| nocaps entire | Prismer | CIDEr | 110.84 | — | Unverified |
| nocaps val | Prismer | CIDEr | 107.9 | — | Unverified |