
Movie101v2: Improved Movie Narration Benchmark

2024-04-20

Zihao Yue, Yepeng Zhang, Ziheng Wang, Qin Jin

Abstract

Automatic movie narration aims to generate video-aligned plot descriptions to assist visually impaired audiences. Unlike standard video captioning, it involves not only describing key visual details but also inferring plots that unfold across multiple movie shots, presenting distinct and complex challenges. To advance this field, we introduce Movie101v2, a large-scale, bilingual dataset with enhanced data quality specifically designed for movie narration. Revisiting the task, we propose breaking down the ultimate goal of automatic movie narration into three progressive stages, offering a clear roadmap with corresponding evaluation metrics. On our new benchmark, we evaluate a range of large vision-language models, including GPT-4V, as baselines and conduct an in-depth analysis of the challenges in narration generation. Our findings highlight that achieving applicable movie narration generation is a fascinating goal that requires significant further research.
