
Uncovering Temporal Context for Video Question Answering

2015-11-15

Linchao Zhu, Zhongwen Xu, Yi Yang, Alexander G. Hauptmann


Abstract

In this work, we introduce Video Question Answering in the temporal domain to infer the past, describe the present, and predict the future. We present an encoder-decoder approach using Recurrent Neural Networks to learn the temporal structure of videos, and introduce a dual-channel ranking loss to answer multiple-choice questions. We explore approaches for a finer understanding of video content using "fill-in-the-blank" questions, and collected 109,895 video clips with a total duration of over 1,000 hours from the TACoS, MPII-MD, and MEDTest 14 datasets; the corresponding 390,744 questions are generated from annotations. Extensive experiments demonstrate that our approach significantly outperforms the compared baselines.
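The abstract mentions a ranking loss for scoring multiple-choice answers. The following is a minimal, hedged sketch of a generic margin-based ranking loss over candidate-answer embeddings, not the paper's exact dual-channel formulation; the embedding dimensions, margin value, and scoring function (cosine similarity) are illustrative assumptions.

```python
import numpy as np

def margin_ranking_loss(video_emb, pos_emb, neg_embs, margin=0.2):
    """Hinge-style ranking loss: the correct answer's embedding should score
    higher (by cosine similarity against the video embedding) than each
    distractor by at least `margin`. This is a generic sketch only; the
    paper's dual-channel loss is more elaborate."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos_score = cos(video_emb, pos_emb)
    # One hinge term per distractor; zero once the margin is satisfied.
    losses = [max(0.0, margin - pos_score + cos(video_emb, n)) for n in neg_embs]
    return sum(losses) / len(losses)

rng = np.random.default_rng(0)
v = rng.normal(size=128)                          # video embedding
pos = v + 0.1 * rng.normal(size=128)              # correct answer, close to video
negs = [rng.normal(size=128) for _ in range(3)]   # random distractors
print(margin_ranking_loss(v, pos, negs))
```

In training, such a loss would be minimized over embedding parameters so that correct answers are pulled toward their video representation and distractors are pushed away.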
