Imagine360: Immersive 360 Video Generation from Perspective Anchor
Jing Tan, Shuai Yang, Tong Wu, Jingwen He, Yuwei Guo, Ziwei Liu, Dahua Lin
Code: github.com/ys-imtech/imagine360
Abstract
360° videos offer a hyper-immersive experience, allowing viewers to explore a dynamic scene from a full 360 degrees. To achieve more user-friendly and personalized content creation in the 360° video format, we seek to lift standard perspective videos into 360° equirectangular videos. To this end, we introduce Imagine360, the first perspective-to-360° video generation framework that creates high-quality 360° videos with rich and diverse motion patterns from video anchors. Imagine360 learns fine-grained spherical visual and motion patterns from limited 360° video data through several key designs. 1) First, we adopt a dual-branch design consisting of a perspective and a panorama video denoising branch, which provide local and global constraints for 360° video generation, with the motion module and spatial LoRA layers fine-tuned on extended web 360° videos. 2) Additionally, an antipodal mask is devised to capture long-range motion dependencies, enhancing the reversed camera motion between antipodal pixels across hemispheres. 3) To handle diverse perspective video inputs, we propose elevation-aware designs that adapt to varying video masking caused by changing elevations across frames. Extensive experiments show that Imagine360 achieves superior graphics quality and motion coherence among state-of-the-art 360° video generation methods. We believe Imagine360 holds promise for advancing personalized, immersive 360° video creation.
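To make the antipodal correspondence mentioned above concrete, the minimal sketch below maps each equirectangular pixel to its antipode on the sphere (longitude shifted by 180°, latitude negated), which is the pairing an antipodal mask would relate across hemispheres. This is an assumed illustrative formulation, not the authors' released implementation, and the function name `antipodal_index_map` is hypothetical.

```python
import numpy as np

def antipodal_index_map(height, width):
    """For an equirectangular frame of shape (height, width), return the
    pixel coordinates of each pixel's antipode on the sphere.

    A pixel at longitude theta and latitude phi has its antipode at
    (theta + 180 deg, -phi). In equirectangular pixel coordinates this is
    a half-width horizontal shift plus a vertical flip.
    """
    v, u = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    u_antipode = (u + width // 2) % width   # longitude shifted by 180 degrees
    v_antipode = height - 1 - v             # latitude negated (vertical flip)
    return v_antipode, u_antipode

# Example: gather the antipodal counterpart of every pixel in a frame,
# e.g. to relate motion between opposite hemispheres.
frame = np.random.rand(256, 512, 3)        # toy equirectangular frame
v_a, u_a = antipodal_index_map(*frame.shape[:2])
antipodal_view = frame[v_a, u_a]           # same spatial shape as `frame`
```

Under this pairing, camera-induced motion at a pixel and at its antipode appears reversed, which is the long-range dependency the antipodal mask is designed to capture.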