
Omnidirectional Video Super-Resolution using Deep Learning

2025-06-03

Arbind Agrahari Baniya, Tsz-Kwan Lee, Peter W. Eklund, Sunil Aryal


Abstract

Omnidirectional Videos (or 360 videos) are widely used in Virtual Reality (VR) to facilitate immersive and interactive viewing experiences. However, the limited spatial resolution of 360 videos does not allow each degree of view to be represented with adequate pixels, limiting the visual quality of the immersive experience. Deep learning Video Super-Resolution (VSR) techniques developed for conventional videos could provide a promising software-based solution; however, these techniques do not address the distortion present in equirectangular projections of 360 video signals. An additional obstacle is the limited availability of 360 video datasets for study. To address these issues, this paper creates a novel 360 Video Dataset (360VDS) together with a study of the extensibility of conventional VSR models to 360 videos. This paper further proposes a novel deep learning model for 360 Video Super-Resolution (360 VSR), called Spherical Signal Super-resolution with a Proportioned Optimisation (S3PO). S3PO adopts recurrent modelling with an attention mechanism, unbound from conventional VSR techniques like alignment. With a purpose-built feature extractor and a novel loss function addressing spherical distortion, S3PO outperforms most state-of-the-art conventional VSR models and 360-specific super-resolution models on 360 video datasets. A step-wise ablation study is presented to understand and demonstrate the impact of the chosen architectural sub-components, targeted training and optimisation.
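The abstract mentions a loss function that accounts for spherical distortion in equirectangular projections. The exact formulation of S3PO's loss is not given here; as a minimal sketch of the general idea, the snippet below shows a latitude-weighted error, where each pixel's contribution is scaled by the spherical surface area it represents (a cosine-of-latitude weighting, as used in metrics such as WS-PSNR). The function names and the weighting scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def erp_weights(height, width):
    """Per-pixel spherical area weights for an equirectangular frame.

    Rows near the poles cover less area on the sphere than rows near
    the equator, so their errors are down-weighted.
    """
    # Latitude of each pixel row's centre, in (-pi/2, pi/2).
    lat = (np.arange(height) + 0.5) / height * np.pi - np.pi / 2
    w = np.cos(lat)                          # shape (H,)
    return np.tile(w[:, None], (1, width))   # shape (H, W)

def weighted_mse(sr, hr):
    """Distortion-aware MSE between a super-resolved and a ground-truth
    frame (2-D arrays), normalised by the total weight."""
    w = erp_weights(*sr.shape)
    return float(np.sum(w * (sr - hr) ** 2) / np.sum(w))
```

Under this weighting, an error at the equator costs more than the same error near a pole, which matches the intuition that equirectangular pixels near the poles are heavily oversampled.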
