Glow: Generative Flow with Invertible 1x1 Convolutions
Diederik P. Kingma, Prafulla Dhariwal
Code
- github.com/openai/glow (official, in paper; TensorFlow) ★ 0
- github.com/5yearsKim/Conditional-Normalizing-Flow (PyTorch) ★ 81
- github.com/keonlee9420/VAENAR-TTS (PyTorch) ★ 73
- github.com/Zhangyanbo/iResNetLab (PyTorch) ★ 71
- github.com/L0SG/NanoFlow (PyTorch) ★ 67
- github.com/KiUngSong/Generative-Models (PyTorch) ★ 37
- github.com/lifeitech/fce-2d (PyTorch) ★ 0
- github.com/ikostrikov/pytorch-flows (PyTorch) ★ 0
- github.com/simonwestberg/DD2412-Glow (TensorFlow) ★ 0
- github.com/eyalbetzalel/GLOW2 (framework unspecified) ★ 0
Abstract
Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1x1 convolution. Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images. The code for our model is available at https://github.com/openai/glow
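The core idea named in the title can be sketched compactly: an invertible 1x1 convolution is a per-pixel linear map across channels with a learned c×c weight matrix W, and the change-of-variables formula contributes h·w·log|det W| to the log-likelihood. Below is a minimal NumPy sketch of this operation and its inverse; the function names and the orthogonal initialization (which the paper uses, since |det W| = 1 at init) are illustrative, not the official implementation at the repository above.

```python
import numpy as np

def invertible_1x1_conv(x, W):
    """Apply a 1x1 convolution with invertible weight W (c x c) to x (h, w, c).

    Returns the transformed tensor z and the log-determinant term that the
    change-of-variables formula adds to the log-likelihood: h * w * log|det W|.
    """
    h, w, c = x.shape
    z = x.reshape(-1, c) @ W.T          # same matrix applied at every spatial position
    logdet = h * w * np.log(abs(np.linalg.det(W)))
    return z.reshape(h, w, c), logdet

def inverse_1x1_conv(z, W):
    """Invert the 1x1 convolution by multiplying with W^{-1}."""
    h, w, c = z.shape
    x = z.reshape(-1, c) @ np.linalg.inv(W).T
    return x.reshape(h, w, c)

# Orthogonal initialization: |det W| = 1, so the log-det term starts at zero.
rng = np.random.default_rng(0)
W = np.linalg.qr(rng.normal(size=(3, 3)))[0]
x = rng.normal(size=(8, 8, 3))
z, logdet = invertible_1x1_conv(x, W)
x_rec = inverse_1x1_conv(z, W)
```

In the full Glow model this step alternates with actnorm and affine coupling layers inside a multi-scale architecture; the sketch only isolates the 1x1 convolution itself.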
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ImageNet 32x32 | Glow | NLL (bits/dim) | 4.09 | — | Unverified |