COOT: Cooperative Hierarchical Transformer for Video-Text Representation Learning
Simon Ging, Mohammadreza Zolfaghari, Hamed Pirsiavash, Thomas Brox
Code Available
- github.com/gingsi/coot-videotext (official, in paper, PyTorch) ★ 291
Abstract
Many real-world video-text tasks involve different levels of granularity, such as frames and words, clips and sentences, or videos and paragraphs, each with distinct semantics. In this paper, we propose a Cooperative hierarchical Transformer (COOT) to leverage this hierarchy information and model the interactions between different levels of granularity and different modalities. The method consists of three major components: an attention-aware feature aggregation layer, which leverages the local temporal context (intra-level, e.g., within a clip); a contextual transformer to learn the interactions between low-level and high-level semantics (inter-level, e.g., clip-video, sentence-paragraph); and a cross-modal cycle-consistency loss to connect video and text. The resulting method compares favorably to the state of the art on several benchmarks while having few parameters. All code is available open-source at https://github.com/gingsi/coot-videotext.
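To make the third component more concrete, below is a minimal sketch of a cross-modal cycle-consistency loss in the soft-nearest-neighbour style the abstract alludes to: each clip embedding hops to its soft nearest sentence embedding and back, and the loss penalizes failure to return to the original clip index. The function name, tensor shapes, and the squared-distance/MSE choices are illustrative assumptions, not the repository's actual API; the paper's exact formulation may differ in details.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(clip_emb: torch.Tensor, sent_emb: torch.Tensor) -> torch.Tensor:
    """Sketch of a cross-modal cycle-consistency loss (illustrative, not the official code).

    clip_emb: (N, D) clip embeddings of one video
    sent_emb: (N, D) sentence embeddings of the paired paragraph
    Clip i should cycle back to index i after a soft hop through the sentences.
    """
    # Soft nearest neighbour of each clip in the sentence embedding space.
    sim_cs = -torch.cdist(clip_emb, sent_emb) ** 2          # (N, N) similarity from negative squared distance
    alpha = F.softmax(sim_cs, dim=1)
    soft_sent = alpha @ sent_emb                             # (N, D) soft sentence neighbour per clip

    # Cycle back: soft position of that neighbour among the clips.
    sim_sc = -torch.cdist(soft_sent, clip_emb) ** 2          # (N, N)
    beta = F.softmax(sim_sc, dim=1)
    idx = torch.arange(clip_emb.size(0), dtype=clip_emb.dtype, device=clip_emb.device)
    soft_idx = beta @ idx                                    # (N,) expected landing index per clip

    # Penalize deviation from the original clip index.
    return F.mse_loss(soft_idx, idx)
```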
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ActivityNet Captions | COOT (ae-test split, appearance features only) | BLEU-4 | 10.85 | — | Unverified |
| YouCook2 | COOT | BLEU-4 | 11.3 | — | Unverified |