Video-Text Retrieval by Supervised Sparse Multi-Grained Learning
Yimu Wang, Peng Shi
Code
- github.com/yimuwangcs/Better_Cross_Modal_Retrieval (official, PyTorch)
Abstract
Recent progress in video-text retrieval has been driven by the exploration of better representation learning. In this paper, we present a novel multi-grained sparse learning framework, S3MA, that learns an aligned sparse space shared between video and text for video-text retrieval. The shared sparse space is initialized with a finite number of sparse concepts, each of which refers to a number of words. Using the text data at hand, we learn and update the shared sparse space in a supervised manner with the proposed similarity and alignment losses. Moreover, to enable multi-grained alignment, we incorporate frame representations to better model the video modality and to calculate fine-grained and coarse-grained similarities. Benefiting from the learned shared sparse space and multi-grained similarities, S3MA outperforms existing methods on several video-text retrieval benchmarks, as demonstrated by extensive experiments. Our code is available at https://github.com/yimuwangcs/Better_Cross_Modal_Retrieval.
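Since the abstract describes the framework only at a high level, the following is a minimal, hypothetical PyTorch sketch of how a shared sparse concept space and multi-grained similarity (coarse video-sentence plus fine frame-sentence) might be wired together. All names (`SparseMultiGrainedSimilarity`, `num_concepts`, `topk`) and design choices (top-k sparsification, max-pooled frame similarity, equal weighting of the two grains) are assumptions for illustration, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

class SparseMultiGrainedSimilarity(torch.nn.Module):
    """Hypothetical sketch based only on the abstract: video and text features
    are projected onto a shared set of learnable sparse concepts, and
    coarse-grained (video-sentence) and fine-grained (frame-sentence)
    similarities are combined."""

    def __init__(self, dim: int = 512, num_concepts: int = 1024, topk: int = 32):
        super().__init__()
        # Shared sparse space: a finite number of learnable concept vectors.
        self.concepts = torch.nn.Parameter(torch.randn(num_concepts, dim) * 0.02)
        self.topk = topk  # keep only the top-k concept activations -> sparse codes

    def sparse_code(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) -> sparse activations over concepts, (batch, num_concepts)
        logits = F.normalize(x, dim=-1) @ F.normalize(self.concepts, dim=-1).t()
        vals, idx = logits.topk(self.topk, dim=-1)
        code = torch.zeros_like(logits).scatter_(-1, idx, vals)
        return F.normalize(code, dim=-1)

    def forward(self, video_feat, frame_feats, text_feat):
        # video_feat: (B, dim) pooled video; frame_feats: (B, T, dim); text_feat: (B, dim)
        v_code = self.sparse_code(video_feat)   # (B, K)
        t_code = self.sparse_code(text_feat)    # (B, K)
        coarse = v_code @ t_code.t()            # (B, B) video-sentence similarity
        B, T, D = frame_feats.shape
        f_code = self.sparse_code(frame_feats.reshape(B * T, D)).reshape(B, T, -1)
        # Fine-grained: take the best-matching frame for each (video, text) pair.
        fine = torch.einsum('btk,ck->btc', f_code, t_code).max(dim=1).values  # (B, B)
        return 0.5 * (coarse + fine)
```

In practice, such a similarity matrix would typically be trained with a symmetric contrastive (InfoNCE) loss over the batch, e.g. `F.cross_entropy(sim / tau, labels)` in both directions; the supervised alignment loss on the sparse codes mentioned in the abstract is not sketched here.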
Tasks
- Video-Text Retrieval
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MSR-VTT-1kA | S3MA (ViT-B/16) | text-to-video R@1 | 49.8 | — | Unverified |