Accurate and Fast Compressed Video Captioning

2023-09-22 · ICCV 2023 · Code Available

Yaojie Shen, Xin Gu, Kai Xu, Heng Fan, Longyin Wen, Libo Zhang


Abstract

Existing video captioning approaches typically require first sampling frames from a decoded video and then performing subsequent processing (e.g., feature extraction and/or captioning model learning). In this pipeline, manual frame sampling may miss key information in videos and thus degrade performance, while redundant information in the sampled frames may reduce the efficiency of video captioning inference. Addressing this, we study video captioning from a different perspective in the compressed domain, which brings multi-fold advantages over the existing pipeline: 1) compared to raw images from the decoded video, the compressed video, consisting of I-frames, motion vectors, and residuals, is highly distinguishable, which allows us to leverage the entire video for learning, without manual sampling, through a specialized model design; 2) the captioning model is more efficient at inference because smaller and less redundant information is processed. We propose a simple yet effective end-to-end transformer in the compressed domain for video captioning that enables learning directly from the compressed video. We show that even with this simple design, our method achieves state-of-the-art performance on different benchmarks while running almost 2x faster than existing approaches. Code is available at https://github.com/acherstyx/CoCap.
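The abstract's central observation is that a compressed video stores a group of pictures as one full I-frame plus compact per-frame deltas (motion vectors and residuals), which are sparse and carry distinguishable motion information. The toy sketch below illustrates that representation with plain residuals only; the function names and sample data are invented for illustration and are not part of the CoCap pipeline.

```python
# Toy illustration of the compressed-domain idea: store a group of pictures
# (GOP) as one I-frame plus per-frame residuals. Real codecs additionally use
# motion-compensated prediction; this sketch keeps only the residual part.

def compress_gop(frames):
    """Keep the first frame fully (the I-frame); store the rest as residuals."""
    i_frame = frames[0]
    residuals = [
        [cur - prev for cur, prev in zip(frame, prev_frame)]
        for prev_frame, frame in zip(frames, frames[1:])
    ]
    return i_frame, residuals

def decode_gop(i_frame, residuals):
    """Reconstruct every frame by accumulating residuals onto the I-frame."""
    frames = [list(i_frame)]
    for res in residuals:
        frames.append([p + r for p, r in zip(frames[-1], res)])
    return frames

if __name__ == "__main__":
    # A tiny "video": four 4-pixel frames that change slowly over time.
    video = [[10, 10, 10, 10], [10, 11, 10, 10], [10, 12, 11, 10], [9, 12, 11, 10]]
    i_frame, residuals = compress_gop(video)
    # Residuals are mostly zeros, which is why they are compact yet still
    # localize where motion happens.
    print(residuals[0])  # [0, 1, 0, 0]
    assert decode_gop(i_frame, residuals) == video
```

Because the residual and motion-vector streams are already small, a model that consumes them directly can cover the whole video without the manual frame sampling and redundant decoding the abstract criticizes.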

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| MSR-VTT | CoCap (ViT-L/14) | CIDEr | 57.2 | — | Unverified |
| MSVD | CoCap (ViT-L/14) | CIDEr | 121.5 | — | Unverified |
| VATEX | CoCap (ViT-L/14) | BLEU-4 | 35.8 | — | Unverified |

Reproductions