
Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring

2016-12-07 · CVPR 2017 · Code Available

Seungjun Nah, Tae Hyun Kim, Kyoung Mu Lee


Abstract

Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem, as blurs arise not only from multiple object motions but also from camera shake and scene depth variation. To remove these complicated motion blurs, conventional energy-optimization-based methods rely on simple assumptions, such as the blur kernel being partially uniform or locally linear. Moreover, recent machine-learning-based methods also depend on synthetic blur datasets generated under these assumptions. As a result, conventional deblurring methods fail to remove blurs where the blur kernel is difficult to approximate or parameterize (e.g., at object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner when blur is caused by various sources. In addition, we present a multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry images and the corresponding ground-truth sharp images, obtained with a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves state-of-the-art performance in dynamic scene deblurring, both qualitatively and quantitatively.
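The multi-scale loss described in the abstract can be illustrated with a minimal sketch: compute a reconstruction error at each level of an image pyramid and average the per-scale terms, so that coarse scales supervise global structure and fine scales supervise detail. The functions below are illustrative (the paper's actual network and training loss are defined in its official code); `downsample` uses simple 2x2 average pooling as a stand-in for the pyramid levels.

```python
import numpy as np

def downsample(img):
    """Halve spatial resolution by averaging 2x2 blocks (one pyramid level)."""
    h, w = img.shape[:2]
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

def multi_scale_loss(pred, target, num_scales=3):
    """Average per-scale MSE between prediction and ground-truth pyramids.

    Mimics coarse-to-fine supervision: the same pair is compared at
    progressively coarser resolutions and the errors are averaged.
    """
    total = 0.0
    p, t = pred.astype(np.float64), target.astype(np.float64)
    for _ in range(num_scales):
        total += np.mean((p - t) ** 2)
        p, t = downsample(p), downsample(t)
    return total / num_scales
```

Averaging over scales keeps the loss magnitude comparable regardless of how many pyramid levels are used; a weighted sum per scale is an equally common variant.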

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| GoPro | Nah et al. | PSNR | 29.08 | | Unverified |
| HIDE (trained on GoPro) | Nah et al. | PSNR (sRGB) | 25.73 | | Unverified |
| RealBlur-R (trained on GoPro) | Nah et al. | SSIM (sRGB) | 0.84 | | Unverified |

Reproductions