SOTAVerified

VLCap: Vision-Language with Contrastive Learning for Coherent Video Paragraph Captioning

2022-06-26 · Code Available

Kashu Yamazaki, Sang Truong, Khoa Vo, Michael Kidd, Chase Rainwater, Khoa Luu, Ngan Le


Abstract

In this paper, we leverage the human perception process, which involves vision and language interaction, to generate a coherent paragraph description of untrimmed videos. We propose vision-language (VL) features consisting of two modalities: (i) a vision modality to capture the global visual content of the entire scene, and (ii) a language modality to extract descriptions of scene elements, covering both human and non-human objects (e.g., animals, vehicles) and visual and non-visual elements (e.g., relations, activities). Furthermore, we propose training VLCap under a contrastive learning VL loss. Experiments and ablation studies on the ActivityNet Captions and YouCookII datasets show that VLCap outperforms existing SOTA methods on both accuracy and diversity metrics.
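
The abstract does not spell out the form of the contrastive VL loss. A common choice for aligning paired features from two modalities is a symmetric InfoNCE-style objective; the sketch below is only an illustration under that assumption, not the authors' implementation. The function name `contrastive_vl_loss`, the pooling of each modality into a single embedding per clip, and the `temperature` value are all hypothetical.

```python
# Minimal sketch of a symmetric InfoNCE-style contrastive loss between
# paired vision and language embeddings (assumption: one pooled embedding
# per clip for each modality; not the authors' code).
import torch
import torch.nn.functional as F

def contrastive_vl_loss(vision_emb: torch.Tensor,
                        language_emb: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """vision_emb, language_emb: (batch, dim) embeddings of the same clips.

    Matching pairs (row i with row i) are pulled together; every other
    pairing in the batch serves as a negative.
    """
    v = F.normalize(vision_emb, dim=-1)
    l = F.normalize(language_emb, dim=-1)
    logits = v @ l.t() / temperature                  # (batch, batch) similarities
    targets = torch.arange(v.size(0), device=v.device)
    loss_v2l = F.cross_entropy(logits, targets)       # vision -> language
    loss_l2v = F.cross_entropy(logits.t(), targets)   # language -> vision
    return 0.5 * (loss_v2l + loss_l2v)

if __name__ == "__main__":
    # Example usage with random embeddings (hypothetical batch size and dim).
    v = torch.randn(8, 512)
    t = torch.randn(8, 512)
    print(contrastive_vl_loss(v, t).item())
```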

Tasks

Benchmark Results

Dataset              | Model                                        | Metric | Claimed | Verified | Status
ActivityNet Captions | VLCap (ae-test split), Appearance + Language | BLEU@4 | 13.38   | —        | Unverified

Reproductions