
Full-Glow: Fully conditional Glow for more realistic image generation

2020-12-10

Moein Sorkhei, Gustav Eje Henter, Hedvig Kjellström


Abstract

Autonomous agents, such as driverless cars, require large amounts of labeled visual data for their training. A viable approach for acquiring such data is to train a generative model on collected real data and then augment the real dataset with synthetic images from the model, generated with control over the scene layout and ground-truth labeling. In this paper, we propose Full-Glow, a fully conditional Glow-based architecture for generating plausible and realistic images of novel street scenes given a semantic segmentation map indicating the scene layout. Benchmark comparisons show that our model outperforms recent work in terms of the semantic segmentation performance of a pretrained PSPNet on the generated images. This indicates that images from our model are, to a higher degree than those from other models, similar to real images of the same kinds of scenes and objects, making them suitable as training data for a visual semantic segmentation or object recognition system.
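The paper's actual architecture is not reproduced here, but the core building block of conditional Glow-style flows is an affine coupling step whose scale and shift depend on the conditioning input (here, features of a segmentation map). Below is a minimal, hypothetical NumPy sketch with a fixed stand-in for the learned conditioning network, illustrating the invertibility that makes exact likelihood training possible; the real model uses learned convolutional networks and many such steps.

```python
import numpy as np

def conditional_affine_coupling(x, cond, forward=True):
    """One conditional affine coupling step (simplified sketch, not the
    paper's implementation).

    The first half of x passes through unchanged; the second half is
    scaled and shifted by quantities computed from the first half and
    the conditioning features `cond`. Because the transform of the
    second half is affine given (x1, cond), it can be inverted exactly.
    """
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    # Stand-in for Glow's learned conditioning network: a fixed,
    # deterministic function of the untouched half and the condition.
    h = np.tanh(x1 + cond[..., :d])
    log_s, t = 0.5 * h, h  # log-scale and shift
    if forward:
        y2 = x2 * np.exp(log_s) + t
    else:
        y2 = (x2 - t) * np.exp(-log_s)  # exact inverse
    return np.concatenate([x1, y2], axis=-1)

# Round-trip check: inverting the forward pass recovers the input.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # toy "image" features
cond = rng.normal(size=(4, 8))   # toy "segmentation map" features
y = conditional_affine_coupling(x, cond, forward=True)
x_rec = conditional_affine_coupling(y, cond, forward=False)
```

Exact invertibility is what distinguishes flow models like Glow from GAN-based segmentation-to-image methods: the model's log-likelihood can be computed and maximized directly, with the conditioning signal injected into every coupling step.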
