Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model
Yinhuai Wang, Jiwen Yu, Jian Zhang
Code
- github.com/wyhuai/ddnm — official (in paper), PyTorch, ★ 1,331
- github.com/xypeng9903/k-diffusion-inverse-problems — PyTorch, ★ 63
- github.com/ipc-lab/deepjscc-diffusion — PyTorch, ★ 14
- github.com/andreamazzitelli/ProjectNN — ★ 1
Abstract
Most existing Image Restoration (IR) models are task-specific and cannot generalize to different degradation operators. In this work, we propose the Denoising Diffusion Null-Space Model (DDNM), a novel zero-shot framework for arbitrary linear IR problems, including but not limited to image super-resolution, colorization, inpainting, compressed sensing, and deblurring. DDNM needs only a pre-trained, off-the-shelf diffusion model as the generative prior, without any extra training or network modification. By refining only the null-space contents during the reverse diffusion process, it yields diverse results satisfying both data consistency and realness. We further propose an enhanced and robust version, dubbed DDNM+, to support noisy restoration and improve restoration quality on hard tasks. Our experiments on several IR tasks show that DDNM outperforms other state-of-the-art zero-shot IR methods. We also demonstrate that DDNM+ can solve complex real-world applications, e.g., old photo restoration.
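The core idea in the abstract is a range-null-space decomposition: for a linear degradation y = A x, the range-space part of any estimate can be pinned to the measurement via the pseudo-inverse, x̂ = A⁺y + (I − A⁺A)x₀, while the diffusion prior refines only the null-space part. The following is a minimal numpy sketch of that rectification step, using a random wide matrix as a stand-in for a real degradation operator; the names `rectify` and `x0` are illustrative, not from the official codebase.

```python
import numpy as np

# Range-null-space rectification: x_hat = A^+ y + (I - A^+ A) x0.
# A @ x_hat equals y exactly, while the null-space content of x0
# (the part the diffusion prior is free to shape) is preserved.

rng = np.random.default_rng(0)
d, n = 4, 8                        # measurement dim < signal dim
A = rng.standard_normal((d, n))    # toy linear degradation operator
x_true = rng.standard_normal(n)
y = A @ x_true                     # observed degraded signal

A_pinv = np.linalg.pinv(A)         # Moore-Penrose pseudo-inverse A^+

def rectify(x0, A, A_pinv, y):
    """Replace the range-space content of x0 with A^+ y; keep null-space."""
    return A_pinv @ y + (np.eye(A.shape[1]) - A_pinv @ A) @ x0

x0 = rng.standard_normal(n)        # e.g. a diffusion model's x0-prediction
x_hat = rectify(x0, A, A_pinv, y)

print(np.allclose(A @ x_hat, y))   # data consistency holds exactly
```

In DDNM this rectification is applied to the model's clean-image estimate at every reverse-diffusion step, which is why data consistency is guaranteed by construction rather than by extra training.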
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CelebA | A+y | Consistency | 0 | — | Unverified |
| CelebA | DDNM | Consistency | 26.25 | — | Unverified |
| CelebA | DDRM | Consistency | 455.9 | — | Unverified |
| ImageNet | A+y | Consistency | 0 | — | Unverified |
| ImageNet | DDNM | Consistency | 42.32 | — | Unverified |
| ImageNet | DDRM | Consistency | 260.4 | — | Unverified |
| ImageNet | DGP | FID | 69.54 | — | Unverified |