Blind Video Temporal Consistency via Deep Video Prior
Chenyang Lei, Yazhou Xing, Qifeng Chen
- github.com/ChenyangLEI/deep-video-prior (official, TensorFlow, linked in paper) ★ 328
- github.com/yzxing87/pytorch-deep-video-prior (official, PyTorch) ★ 127
Abstract
Applying an image processing algorithm independently to each video frame often leads to temporal inconsistency in the resulting video. To address this issue, we present a novel and general approach for blind video temporal consistency. Our method is trained directly on a pair of original and processed videos rather than on a large dataset. Unlike most previous methods, which enforce temporal consistency with optical flow, we show that temporal consistency can be achieved by training a convolutional network on a video with the Deep Video Prior. Moreover, we propose a carefully designed iteratively reweighted training strategy to address the challenging multimodal inconsistency problem. We demonstrate the effectiveness of our approach on 7 computer vision tasks on videos. Extensive quantitative and perceptual experiments show that our approach outperforms state-of-the-art methods on blind video temporal consistency. Our source code is publicly available at github.com/ChenyangLEI/deep-video-prior.
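The iteratively reweighted training strategy mentioned in the abstract handles multimodal inconsistency by letting the network produce a "main" output and a "minor" output, then weighting the reconstruction loss per pixel toward whichever output is already closer to the processed frame. The sketch below illustrates that reweighting idea only; the distance metric (mean absolute error per pixel), the `margin` parameter, and the function name `irt_loss` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def irt_loss(o_main, o_minor, p, margin=0.02):
    """Illustrative iteratively-reweighted loss.

    o_main, o_minor: two candidate network outputs, shape (H, W, C)
    p: the processed target frame, shape (H, W, C)
    Returns a scalar loss that routes each pixel's supervision to the
    candidate output currently closer to the target (the dominant mode).
    """
    # Per-pixel distance of each candidate to the processed frame.
    d_main = np.abs(o_main - p).mean(axis=-1, keepdims=True)
    d_minor = np.abs(o_minor - p).mean(axis=-1, keepdims=True)
    # Confidence map: 1 where the main output fits the target at least
    # as well as the minor output (within a small margin, an assumption).
    conf = (d_main < d_minor + margin).astype(np.float32)
    # Each pixel is supervised through exactly one of the two outputs.
    return float((conf * np.abs(o_main - p)
                  + (1.0 - conf) * np.abs(o_minor - p)).mean())
```

Because each pixel's gradient flows through only one candidate output, the main branch gradually converges to the dominant mode of the processed video while inconsistent outlier pixels are absorbed by the minor branch.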