
Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference

2021-06-27 · ACL (MMSR, IWCS) 2021

Riko Suzuki, Hitomi Yanaka, Koji Mineshima, Daisuke Bekki


Abstract

This paper introduces a new video-and-language dataset with human actions for multimodal logical inference, which focuses on intentional and aspectual expressions that describe dynamic human actions. The dataset consists of 200 videos, 5,554 action labels, and 1,942 action triplets of the form <subject, predicate, object> that can be translated into logical semantic representations. The dataset is expected to be useful for evaluating multimodal inference systems between videos and semantically complicated sentences including negation and quantification.
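The abstract notes that each action triplet <subject, predicate, object> can be translated into a logical semantic representation. As a minimal illustrative sketch (not the paper's actual translation procedure, and using a hypothetical function name), a neo-Davidsonian event-style rendering of a triplet might look like this:

```python
from typing import Tuple


def triplet_to_formula(triplet: Tuple[str, str, str]) -> str:
    """Translate an action triplet <subject, predicate, object> into a
    neo-Davidsonian first-order formula.

    Illustrative format only; the dataset's actual logical
    representations may differ in predicate names and structure.
    """
    subj, pred, obj = triplet
    # Existentially quantify over an event variable e and attach the
    # subject and object as thematic roles of that event.
    return f"exists e. ({pred}(e) & subj(e, {subj}) & obj(e, {obj}))"


# Example: a triplet describing a person opening a door.
print(triplet_to_formula(("person", "open", "door")))
# -> exists e. (open(e) & subj(e, person) & obj(e, door))
```

Representations of this kind make it straightforward to compose the triplet-derived formulas with negation and quantifiers, which is what the dataset's inference task targets.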
