
Color Mismatches in Stereoscopic Video: Real-World Dataset and Deep Correction Method

2023-03-12

Egor Chistov, Nikita Alutis, Dmitriy Vatolin

Abstract

Stereoscopic videos can contain color mismatches between the left and right views owing to minor variations in camera settings and lenses, and even to object reflections captured from different positions. Such mismatches can cause viewer discomfort and headaches. The problem can be addressed by transferring color between the stereoscopic views, but traditional methods often lack quality, while neural-network-based methods easily overfit on artificial data. The scarcity of stereoscopic videos with real-world color mismatches hinders evaluation of the various methods' performance. We therefore filmed a video dataset using a beam splitter; it includes both distorted frames with color mismatches and ground-truth data. Our second contribution is a deep multiscale neural network that solves the color-mismatch-correction task by leveraging stereo correspondences. The experimental results demonstrate the effectiveness of the proposed method on a conventional dataset, but room for improvement remains on challenging real-world data.
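The abstract contrasts the proposed neural approach with traditional color transfer between views. As a point of reference, a minimal sketch of one such traditional baseline is shown below: global Reinhard-style statistics matching, which aligns the per-channel mean and standard deviation of one view to the other. This is an illustrative assumption about the class of "traditional methods" mentioned, not the paper's method; the function name and NumPy implementation are the editor's own.

```python
import numpy as np

def global_color_transfer(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Match the per-channel mean and std of `source` to `reference`.

    A classic global color-transfer baseline (Reinhard-style statistics
    matching). Illustrative only; it ignores stereo correspondences,
    which is one reason global methods often lack quality on real data.
    """
    src = source.astype(np.float64)
    ref = reference.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        # Avoid division by zero on flat channels.
        scale = r_std / s_std if s_std > 0 else 1.0
        out[..., c] = (src[..., c] - s_mean) * scale + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the correction is a single affine map per channel, it cannot fix spatially varying mismatches (e.g., view-dependent reflections), which motivates correspondence-aware methods like the one proposed here.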
