Patchwise Generative ConvNet: Training Energy-Based Models From a Single Natural Image for Internal Learning

2021-06-19 · CVPR 2021

Zilong Zheng, Jianwen Xie, Ping Li

Abstract

Exploiting the internal statistics of a single natural image has long been recognized as a significant research paradigm, where the goal is to learn the distribution of patches within the image without relying on external training data. In contrast to prior works that model such distributions implicitly with a top-down latent variable model (i.e., a generator), we propose to explicitly represent the statistical distribution within a single natural image through an energy-based generative framework, in which a pyramid of energy functions, each parameterized by a bottom-up deep neural network, captures the distributions of patches at different resolutions. A coarse-to-fine sequential training and sampling strategy is presented to train the model efficiently. Besides learning to generate random samples from white noise, the model can learn in parallel to recover a real image from its incomplete version, which improves the descriptive power of the learned models. The proposed model is simple and natural in that it requires no auxiliary models (e.g., discriminators) to assist training, and it unifies internal statistics learning and image generation in a single framework. Qualitative results are presented on various image generation tasks, including super-resolution, image editing, and harmonization. Evaluations and user studies demonstrate the superior quality of our results.
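The coarse-to-fine sampling scheme the abstract describes can be sketched in a minimal toy form. The snippet below is illustrative only and is not the paper's implementation: it replaces the learned bottom-up ConvNet energies with hand-written quadratic energies over a two-level pyramid, but keeps the structure of starting from white noise at the coarsest scale, running Langevin dynamics against each scale's energy, and upsampling the result to initialize the next finer scale. All function names (`energy_grad`, `langevin_sample`, `coarse_to_fine`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_grad(x, target):
    # Toy stand-in for the gradient of a learned patch energy:
    # a quadratic well centered at `target` (gradient of 0.5 * ||x - target||^2).
    return x - target

def langevin_sample(x, target, steps=200, step_size=0.01):
    # Langevin dynamics: gradient descent on the energy plus Gaussian noise,
    # which (for small steps) samples from the energy-based model.
    for _ in range(steps):
        noise = rng.normal(size=x.shape)
        x = x - 0.5 * step_size * energy_grad(x, target) + np.sqrt(step_size) * noise
    return x

def upsample(x, factor=2):
    # Nearest-neighbour upsampling to move to the next (finer) scale.
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def coarse_to_fine(targets):
    # Start from white noise at the coarsest scale; at each finer scale,
    # initialize from the upsampled previous sample and refine with Langevin.
    x = rng.normal(size=targets[0].shape)
    for i, t in enumerate(targets):
        if i > 0:
            x = upsample(x)
        x = langevin_sample(x, t)
    return x

# Hypothetical two-level pyramid of "learned" targets (coarse 4x4 -> fine 8x8).
targets = [np.zeros((4, 4)), np.zeros((8, 8))]
sample = coarse_to_fine(targets)
print(sample.shape)
```

In the paper, each pyramid level's energy is a bottom-up ConvNet trained on the single input image, and sampling a level starts from the upsampled sample of the coarser level rather than from fresh noise, which is what makes the sequential scheme efficient.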
