SOTAVerified

MDMMT-2: Multidomain Multimodal Transformer for Video Retrieval, One More Step Towards Generalization

2022-03-14

Alexander Kunitsyn, Maksim Kalashnikov, Maksim Dzabraev, Andrei Ivaniuta

Abstract

In this work we present a new state of the art on the text-to-video retrieval task on MSR-VTT, LSMDC, MSVD, YouCook2 and TGIF, obtained by a single model. Three different data sources are combined: weakly supervised videos, crowd-labeled text-image pairs and text-video pairs. A careful analysis of available pre-trained networks helps to choose the best ones as sources of prior knowledge. We introduce a three-stage training procedure that provides high knowledge-transfer efficiency and allows the use of noisy datasets during training without degrading the prior knowledge. Additionally, double positional encoding is used for better fusion of different modalities, and a simple method for processing non-square inputs is suggested.
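The "double positional encoding" mentioned above can be read as giving each token two additive offsets before fusion: one indexed by its modality type and one by its temporal position. The sketch below is only an illustration of that idea under those assumptions; the table names, shapes, and lookup scheme are ours, not the authors' exact implementation.

```python
import numpy as np

def double_positional_encoding(features, modality_ids, positions,
                               modality_table, position_table):
    """Add two encodings to each token embedding before multimodal fusion.

    features       : (n_tokens, d_model) token embeddings
    modality_ids   : (n_tokens,) index of each token's modality (e.g. video=0, audio=1)
    positions      : (n_tokens,) temporal position of each token within its modality
    modality_table : (n_modalities, d_model) learned per-modality encodings (assumed)
    position_table : (max_len, d_model) learned per-position encodings (assumed)
    """
    # Each token is offset by both its modality-type encoding and its
    # within-modality position encoding -- the "double" in the name.
    return features + modality_table[modality_ids] + position_table[positions]

# Usage: two modalities, two time steps each, d_model = 8.
feats = np.zeros((4, 8))
out = double_positional_encoding(
    feats,
    modality_ids=np.array([0, 0, 1, 1]),
    positions=np.array([0, 1, 0, 1]),
    modality_table=np.ones((2, 8)),
    position_table=np.arange(3)[:, None] * np.ones((3, 8)),
)
```

With zero input features, each output row is just the sum of its two table entries, which makes the additive structure easy to verify.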

Benchmark Results

| Dataset  | Model   | Metric            | Claimed | Verified | Status     |
|----------|---------|-------------------|---------|----------|------------|
| LSMDC    | MDMMT-2 | text-to-video R@1 | 26.9    | —        | Unverified |
| MSR-VTT  | MDMMT-2 | text-to-video R@1 | 33.7    | —        | Unverified |
| MSVD     | MDMMT-2 | text-to-video R@1 | 56.8    | —        | Unverified |
| TGIF     | MDMMT-2 | text-to-video R@1 | 25.5    | —        | Unverified |
| YouCook2 | MDMMT-2 | text-to-video R@1 | 32     | —        | Unverified |

Reproductions