
MoVieDrive: Urban Scene Synthesis with Multi-Modal Multi-View Video Diffusion Transformer

2026-03-13

Guile Wu, David Huang, Dongfeng Bai, Bingbing Liu


Abstract

Urban scene synthesis with video generation models has recently shown great potential for autonomous driving. Existing video generation approaches for autonomous driving primarily focus on RGB video generation and cannot support multi-modal video generation. However, multi-modal data, such as depth maps and semantic maps, are crucial for holistic urban scene understanding in autonomous driving. Although it is feasible to use multiple models to generate different modalities, this complicates model deployment and fails to exploit complementary cues across modalities. To address this problem, in this work we propose a novel multi-modal multi-view video generation approach for autonomous driving. Specifically, we construct a unified diffusion transformer model composed of modal-shared components and modal-specific components. We then leverage diverse conditioning inputs to encode controllable scene structure and content cues into the multi-modal multi-view unified diffusion model. In this way, our approach is capable of generating multi-modal multi-view driving scene videos in a unified framework. Thorough experiments on a real-world autonomous driving dataset show that our approach achieves compelling video generation quality and controllability compared with state-of-the-art methods, while supporting multi-modal multi-view data generation.
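
The abstract does not spell out how the modal-shared and modal-specific components are composed, so the sketch below is only one plausible reading: each modality (here hypothetically RGB, depth, and semantics) gets its own input/output projections, while a shared transformer block attends jointly over all modality tokens so they can exchange complementary cues. All module names, shapes, and the modality set are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch (PyTorch) of a transformer block with modal-specific
# projections around modal-shared attention/MLP layers. Everything here
# is an assumption for illustration.
import torch
import torch.nn as nn

MODALITIES = ["rgb", "depth", "semantic"]  # hypothetical modality set


class MultiModalDiTBlock(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        # Modal-specific components: one projection pair per modality.
        self.in_proj = nn.ModuleDict({m: nn.Linear(dim, dim) for m in MODALITIES})
        self.out_proj = nn.ModuleDict({m: nn.Linear(dim, dim) for m in MODALITIES})
        # Modal-shared components: joint attention and MLP over all tokens,
        # letting modalities exchange complementary cues.
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, tokens: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
        # tokens[m]: (batch, seq_len_m, dim) noisy latent tokens for modality m.
        projected = [self.in_proj[m](tokens[m]) for m in MODALITIES]
        lengths = [t.shape[1] for t in projected]
        x = torch.cat(projected, dim=1)          # concatenate along the sequence axis
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]            # shared self-attention across modalities
        x = x + self.mlp(self.norm2(x))          # shared feed-forward
        chunks = torch.split(x, lengths, dim=1)  # route tokens back to their modality
        return {m: self.out_proj[m](c) for m, c in zip(MODALITIES, chunks)}


# Usage: three modality token streams pass through one shared block.
block = MultiModalDiTBlock()
tokens = {m: torch.randn(2, 16, 512) for m in MODALITIES}
out = block(tokens)
print({m: tuple(v.shape) for m, v in out.items()})
```

The appeal of this split is that the shared attention is where cross-modal cues mix, while the per-modality projections keep each output head specialized, so one model can emit all modalities without separate networks per modality.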
