
Omni^2: Unifying Omnidirectional Image Generation and Editing in an Omni Model

2025-04-15

Liu Yang, Huiyu Duan, Yucheng Zhu, Xiaohong Liu, Lu Liu, Zitong Xu, Guangji Ma, Xiongkuo Min, Guangtao Zhai, Patrick Le Callet


Abstract

360° omnidirectional images (ODIs) have gained considerable attention recently and are widely used in various virtual reality (VR) and augmented reality (AR) applications. However, capturing such images is expensive and requires specialized equipment, making ODI synthesis increasingly important. While common 2D image generation and editing methods are rapidly advancing, these models struggle to deliver satisfactory results when generating or editing ODIs due to the unique format and broad 360° Field-of-View (FoV) of ODIs. To bridge this gap, we construct Any2Omni, the first comprehensive ODI generation-editing dataset, which comprises 60,000+ training samples covering diverse input conditions and up to 9 ODI generation and editing tasks. Built upon Any2Omni, we propose an omni model for omnidirectional image generation and editing (Omni^2), capable of handling various ODI generation and editing tasks under diverse input conditions with a single model. Extensive experiments demonstrate the superiority and effectiveness of the proposed Omni^2 model on both the ODI generation and editing tasks.
