COSA: Concatenated Sample Pretrained Vision-Language Foundation Model

2023-06-15 · Code Available

Sihan Chen, Xingjian He, Handong Li, Xiaojie Jin, Jiashi Feng, Jing Liu


Abstract

Due to the limited scale and quality of video-text training corpora, most vision-language foundation models employ image-text datasets for pretraining and primarily focus on modeling visual semantic representations while disregarding temporal semantic representations and correlations. To address this issue, we propose COSA, a COncatenated SAmple pretrained vision-language foundation model. COSA jointly models visual contents and event-level temporal cues using only image-text corpora. We achieve this by sequentially concatenating multiple image-text pairs as inputs for pretraining. This transformation effectively converts existing image-text corpora into a pseudo long-form video-paragraph corpus, enabling richer scene transformations and explicit event-description correspondence. Extensive experiments demonstrate that COSA consistently improves performance across a broad range of downstream tasks, including long-form/short-form video-text tasks and image-text tasks such as retrieval, captioning, and question answering. Notably, COSA achieves state-of-the-art results on various competitive benchmarks. Code and models are released at https://github.com/TXH-mercury/COSA.
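As a rough illustration of the concatenation idea the abstract describes, the sketch below builds one pseudo video-paragraph sample from an image-text corpus. This is a minimal sketch, not the paper's implementation: the function name, the `num_samples` parameter, and the random sampling strategy are all assumptions made here for illustration; see the released code at https://github.com/TXH-mercury/COSA for the actual method.

```python
import random

def concatenate_samples(image_text_pairs, num_samples=4, rng=None):
    """Build one pseudo video-paragraph sample by sequentially
    concatenating several image-text pairs.

    `image_text_pairs` is a list of (image, caption) tuples.
    `num_samples` (illustrative, not the paper's setting) is how many
    pairs are fused into a single pretraining sample.
    """
    rng = rng or random.Random()
    chosen = rng.sample(image_text_pairs, num_samples)

    # The images act as "frames" of a pseudo long-form video, and each
    # caption becomes one event sentence of the paired paragraph, so
    # frame order and sentence order stay aligned (explicit
    # event-description correspondence).
    frames = [image for image, _ in chosen]
    paragraph = " ".join(caption for _, caption in chosen)
    return frames, paragraph
```

Repeating this over a large image-text corpus would yield the pseudo long-form video-paragraph corpus the abstract refers to, with each "frame" explicitly tied to its event sentence.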

Benchmark Results

Dataset    Model   Metric   Claimed   Verified   Status
MSR-VTT    COSA    CIDEr    74.7      -          Unverified
MSVD       COSA    CIDEr    178.5     -          Unverified
TVC        COSA    BLEU-4   18.8      -          Unverified
VATEX      COSA    BLEU-4   43.7      -          Unverified
YouCook2   COSA    BLEU-4   10.1      -          Unverified

Reproductions

None yet. Be the first to reproduce this paper.