
LongDiff: Training-Free Long Video Generation in One Go

2025-03-23 · CVPR 2025

Zhuoling Li, Hossein Rahmani, Qiuhong Ke, Jun Liu


Abstract

Video diffusion models have recently achieved remarkable results in video generation. Despite their encouraging performance, most of these models are mainly designed and trained for short video generation, leading to challenges in maintaining temporal consistency and visual detail when generating long videos. In this paper, we propose LongDiff, a novel training-free method consisting of two carefully designed components, Position Mapping (PM) and Informative Frame Selection (IFS), which tackle the two key challenges that hinder short-to-long video generation generalization: temporal position ambiguity and information dilution. LongDiff unlocks the potential of off-the-shelf video diffusion models to achieve high-quality long video generation in one go. Extensive experiments demonstrate the efficacy of our method.
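
The abstract does not detail how PM and IFS work, but the two named challenges suggest their general shape. Below is a minimal, purely illustrative sketch assuming that PM keeps the pairwise temporal positions seen by a short-clip-trained model within its trained range (here via simple relative-position clipping), and that IFS restricts each frame's temporal attention to a small set of key frames (here the top-k most similar) so attention is not diluted across the full sequence. The function names, clipping scheme, and similarity-based selection are assumptions for illustration, not the paper's actual algorithms.

```python
import torch

# Hypothetical sketch only: the paper's actual PM/IFS mechanisms are not
# specified in the abstract; the rules below are illustrative assumptions.

def mapped_relative_positions(num_frames: int, max_rel: int) -> torch.Tensor:
    """Pairwise temporal offsets for a long sequence, clipped so a model
    trained on short clips never sees an out-of-range relative position
    (one plausible way to sidestep temporal position ambiguity)."""
    idx = torch.arange(num_frames)
    rel = idx[None, :] - idx[:, None]        # (F, F) signed frame offsets
    return rel.clamp(-max_rel, max_rel)

def select_informative_frames(feats: torch.Tensor, k: int) -> torch.Tensor:
    """For each frame, pick the k most similar frames (cosine similarity of
    pooled features) to attend to, so temporal attention is not spread
    thinly over every frame of a long video (information dilution)."""
    f = torch.nn.functional.normalize(feats, dim=-1)  # (F, D) unit features
    sim = f @ f.T                                     # (F, F) similarities
    return sim.topk(k, dim=-1).indices                # (F, k) frame indices

if __name__ == "__main__":
    print(mapped_relative_positions(8, max_rel=3))
    feats = torch.randn(8, 16)
    print(select_informative_frames(feats, k=3))
```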
