
MDMMT: Multidomain Multimodal Transformer for Video Retrieval

2021-03-19 · Code Available

Maksim Dzabraev, Maksim Kalashnikov, Stepan Komkov, Aleksandr Petiushko


Abstract

We present a new state of the art for the text-to-video retrieval task on the MSRVTT and LSMDC benchmarks, where our model outperforms all previous solutions by a large margin. Moreover, state-of-the-art results are achieved with a single model on both datasets without finetuning. This multidomain generalisation is achieved by a proper combination of different video-caption datasets. We show that training on different datasets can improve each other's test results. Additionally, we checked the intersection between many popular datasets and found that MSRVTT has a significant overlap between its test and train parts; the same situation is observed for ActivityNet.
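The R@1 numbers reported below are standard recall-at-k values for retrieval: for each text query, the videos are ranked by similarity, and the metric is the fraction of queries whose ground-truth video appears in the top k. A minimal illustrative sketch (the function name and the toy similarity matrix are hypothetical, not from the paper; the ground-truth video for query i is assumed to sit at index i):

```python
def recall_at_k(sim, k=1):
    """Fraction of text queries whose matching video appears in the top-k.

    sim: rows are text queries, columns are videos; the ground-truth
    video for query i is assumed to be video i (illustrative convention).
    """
    hits = 0
    for i, row in enumerate(sim):
        # Video indices sorted by descending similarity for query i.
        ranked = sorted(range(len(row)), key=lambda j: -row[j])
        if i in ranked[:k]:
            hits += 1
    return hits / len(sim)

# Toy 3x3 text-video similarity matrix.
sim = [
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.1],
    [0.3, 0.4, 0.5],
]
print(recall_at_k(sim, k=1))  # → 1.0 (every query ranks its own video first)
```

Higher k (R@5, R@10) is computed the same way by widening the top-k window.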


Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| LSMDC | MDMMT | text-to-video R@1 | 18.8 | — | Unverified |
| MSR-VTT | MDMMT | text-to-video R@1 | 23.1 | — | Unverified |
| MSR-VTT-1kA | MDMMT | text-to-video R@1 | 38.9 | — | Unverified |

Reproductions