DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks
Orest Kupyn, Volodymyr Budzan, Mykola Mykhailych, Dmytro Mishkin, Jiri Matas
Code
- github.com/KupynOrest/DeblurGAN (official, in paper; PyTorch, ★ 0)
- github.com/anastasiia-kornilova/MMDF (TensorFlow, ★ 9)
- github.com/raven-dehaze-work/DeblurGanToDehaze (TensorFlow, ★ 0)
- github.com/The-GAN-g/DeblurGAN (PyTorch, ★ 0)
- github.com/siddhantkhandelwal/deblur-gan (★ 0)
- github.com/lycutter/deblur_sr_gan (PyTorch, ★ 0)
- github.com/fabriziocacicia/DeblurGAN-TF2.0 (TensorFlow, ★ 0)
- github.com/fatalfeel/DeblurGAN (PyTorch, ★ 0)
- github.com/pgarz/runway_exercise (PyTorch, ★ 0)
- github.com/au1206/Enhance-GAN (★ 0)
Abstract
We present DeblurGAN, an end-to-end learned method for motion deblurring. The learning is based on a conditional GAN and the content loss. DeblurGAN achieves state-of-the-art performance in both the structural similarity measure and visual appearance. The quality of the deblurring model is also evaluated in a novel way, on a real-world problem -- object detection on (de-)blurred images. The method is 5 times faster than the closest competitor -- DeepDeblur. We also introduce a novel method for generating synthetic motion-blurred images from sharp ones, allowing realistic dataset augmentation. The model, code and dataset are available at https://github.com/KupynOrest/DeblurGAN
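The paper's blur-synthesis method generates random camera-motion trajectories (via a Markov process) and converts them into blur kernels. As a rough, hypothetical illustration of the same idea -- not the paper's algorithm -- the sketch below builds a simple straight-line motion kernel and applies it to a grayscale image by 2-D convolution; the `line_kernel` helper and all parameter defaults are assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def line_kernel(length=15, angle_deg=0.0):
    """Straight-line motion-blur kernel: a crude stand-in for the
    paper's random-trajectory kernels."""
    k = np.zeros((length, length))
    c = (length - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    # Sample points along a line through the kernel center.
    for t in np.linspace(-c, c, 4 * length):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c + t * np.sin(theta)))
        k[y, x] = 1.0
    return k / k.sum()  # normalize so image brightness is preserved

def synthesize_blur(sharp, length=15, angle_deg=0.0):
    """Blur a 2-D (grayscale) image with the motion kernel."""
    return convolve2d(sharp, line_kernel(length, angle_deg),
                      mode="same", boundary="symm")
```

In the paper, kernels come from randomly generated sub-pixel trajectories rather than straight lines, which produces far more varied and realistic blur.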
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| RealBlur-J (trained on GoPro) | DeblurGAN | SSIM (sRGB) | 0.83 | — | Unverified |
| RealBlur-R (trained on GoPro) | DeblurGAN | SSIM (sRGB) | 0.90 | — | Unverified |
| REDS | DeblurGAN | Average PSNR | 24.09 | — | Unverified |
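SSIM, the metric claimed in the table above, compares luminance, contrast and structure between a restored image and its ground truth. The sketch below is a deliberately simplified single-window variant (standard implementations slide an 11x11 Gaussian window and average the local scores); the function name and the choice to use one global window are illustrative assumptions:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM computed over the whole image as one window.
    Constants K1=0.01, K2=0.03 follow the usual defaults from
    Wang et al.'s SSIM formulation."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # Combined luminance/contrast/structure comparison.
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A perfect reconstruction scores 1.0; any distortion lowers the score, which is why the RealBlur numbers above (0.83, 0.90) sit below 1.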