
Implicit Diffusion Models for Continuous Super-Resolution

2023-03-29 · CVPR 2023 · Code Available

Sicheng Gao, Xuhui Liu, Bohan Zeng, Sheng Xu, Yanjing Li, Xiaoyan Luo, Jianzhuang Liu, XianTong Zhen, Baochang Zhang

Abstract

Image super-resolution (SR) has attracted increasing attention due to its wide applications. However, current SR methods generally suffer from over-smoothing and artifacts, and most work only with fixed magnifications. This paper introduces an Implicit Diffusion Model (IDM) for high-fidelity continuous image super-resolution. IDM integrates an implicit neural representation and a denoising diffusion model in a unified end-to-end framework, where the implicit neural representation is adopted in the decoding process to learn continuous-resolution representation. Furthermore, we design a scale-controllable conditioning mechanism that consists of a low-resolution (LR) conditioning network and a scaling factor. The scaling factor regulates the resolution and accordingly modulates the proportion of the LR information and generated features in the final output, which enables the model to accommodate the continuous-resolution requirement. Extensive experiments validate the effectiveness of our IDM and demonstrate its superior performance over prior arts.
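The abstract's scale-controllable conditioning mechanism can be illustrated with a minimal sketch. All names here (`mixing_weight`, `scale_conditioned_mix`, the cap `s_max`, the linear weighting rule) are illustrative assumptions, not the authors' implementation; the sketch only shows the stated idea that a continuous scaling factor modulates the proportion of LR-conditioning information versus diffusion-generated features in the output.

```python
import numpy as np

def mixing_weight(s, s_max=8.0):
    """Map a continuous scale factor s in (1, s_max] to a weight in [0, 1).

    Hypothetical linear rule: larger upscaling relies more on generated
    detail and less on the LR conditioning content.
    """
    return 1.0 - min(s, s_max) / s_max

def scale_conditioned_mix(lr_feat, gen_feat, s):
    """Blend LR-conditioning features with generated features.

    Both feature maps are assumed to share the same shape; the scaling
    factor s sets the blend, enabling continuous-resolution output.
    """
    w = mixing_weight(s)
    return w * lr_feat + (1.0 - w) * gen_feat

# Stand-in feature maps: ones for the LR branch, zeros for the
# diffusion-generated branch, so the output directly reads off the weight.
lr_feat = np.ones((4, 4))
gen_feat = np.zeros((4, 4))
out = scale_conditioned_mix(lr_feat, gen_feat, 2.0)  # weight = 1 - 2/8 = 0.75
```

At a 2x scale the sketch keeps 75% LR content; at the cap (8x) it would rely entirely on generated features, matching the abstract's description of the scaling factor regulating the balance.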

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| CelebA-HQ 128x128 | IDM | PSNR | 24.01 | | Unverified |
| DIV2K val - 4x upscaling | IDM | PSNR | 27.59 | | Unverified |
| DIV2K val - 4x upscaling | LAR-SR | PSNR | 27.03 | | Unverified |
| DIV2K val - 4x upscaling | HCFlow | PSNR | 27.02 | | Unverified |
| DIV2K val - 4x upscaling | Bicubic | PSNR | 26.70 | | Unverified |
| DIV2K val - 4x upscaling | HCFlow++ | PSNR | 26.61 | | Unverified |
| DIV2K val - 4x upscaling | RankSRGAN | PSNR | 26.55 | | Unverified |
| DIV2K val - 4x upscaling | ESRGAN | PSNR | 26.22 | | Unverified |
