
DynaVSR: Dynamic Adaptive Blind Video Super-Resolution

2020-11-09 · Code Available

Suyoung Lee, Myungsub Choi, Kyoung Mu Lee


Abstract

Most conventional supervised super-resolution (SR) algorithms assume that low-resolution (LR) data is obtained by downscaling high-resolution (HR) data with a fixed known kernel, but such an assumption often does not hold in real scenarios. Some recent blind SR algorithms have been proposed to estimate different downscaling kernels for each input LR image. However, they suffer from heavy computational overhead, making them infeasible for direct application to videos. In this work, we present DynaVSR, a novel meta-learning-based framework for real-world video SR that enables efficient downscaling model estimation and adaptation to the current input. Specifically, we train a multi-frame downscaling module with various types of synthetic blur kernels, which is seamlessly combined with a video SR network for input-aware adaptation. Experimental results show that DynaVSR consistently improves the performance of the state-of-the-art video SR models by a large margin, with an order of magnitude faster inference time compared to the existing blind SR approaches.
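The core idea in the abstract — estimate the unknown downscaling model, then adapt the networks to the current input at test time — can be illustrated with a toy sketch. This is not the DynaVSR implementation (which uses a learned multi-frame downscaling network and a full video SR model such as EDVR); here the downscaler is fixed 2x average pooling, the "SR network" is a single learnable scalar, and the inner-loop step is plain gradient descent on the self-supervised loss SR(downscale(LR)) ≈ LR. All function names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def downscale2x(frame):
    # Stand-in for the learned multi-frame downscaling module:
    # average-pool 2x2 blocks of a single grayscale frame.
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale2x(frame, weight):
    # Toy "SR network": nearest-neighbor 2x upsampling scaled by
    # one learnable weight (a real model would be a deep network).
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1) * weight

def adapt_step(lr_frame, weight, step_size=0.5):
    # One inner-loop adaptation step on the current input:
    # downscale the LR frame to an even smaller "SLR" frame, then
    # fit SR(SLR) to the original LR frame (self-supervision, no HR needed).
    slr = downscale2x(lr_frame)
    pred = upscale2x(slr, weight)
    err = pred - lr_frame
    # Analytic gradient of the mean squared error w.r.t. the scalar weight.
    slr_up = np.repeat(np.repeat(slr, 2, axis=0), 2, axis=1)
    grad = 2.0 * np.mean(err * slr_up)
    return weight - step_size * grad

# Adapt a deliberately mis-calibrated model to one flat test frame.
lr_frame = np.full((4, 4), 0.8)
w = 0.1
for _ in range(20):
    w = adapt_step(lr_frame, w)
print(round(w, 3))  # converges to 1.0, where SR(SLR) reproduces LR exactly
```

The meta-learning aspect of DynaVSR amounts to choosing the initial weights (here, `w`) during training so that very few such inner-loop steps are needed per input video, which is what makes the adaptation fast enough for practical inference.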

Tasks

Benchmark Results

Dataset                                                    Model      Metric            Claimed  Verified  Status
MSU Video Super Resolution Benchmark: Detail Restoration   DynaVSR-R  Subjective score  6.14     —         Unverified
MSU Video Super Resolution Benchmark: Detail Restoration   DynaVSR-V  Subjective score  4.36     —         Unverified
MSU Video Upscalers: Quality Enhancement                   DynaVSR    SSIM              0.92     —         Unverified

Reproductions