Perceptual Losses for Real-Time Style Transfer and Super-Resolution
Justin Johnson, Alexandre Alahi, Li Fei-Fei
Code Available
- github.com/libreai/neural-painters-x (tf) ★ 79
- github.com/ninatu/mood_challenge (pytorch) ★ 16
- github.com/MartinBuessemeyer/Artistic-Texture-Control (pytorch) ★ 15
- github.com/ZyoungXu/MoSt-DSA (pytorch) ★ 12
- github.com/ashkanpakzad/atn (pytorch) ★ 5
- github.com/vieduy/Neural-Style-Transfer (paddle) ★ 4
- github.com/ksivaman/super-res (pytorch) ★ 3
- github.com/anjalipemmaraju/styletransfernetwork (pytorch) ★ 1
- github.com/CYetlanezi/Proyecto-Opti (tf) ★ 0
- github.com/vijishmadhavan/ArtLine (pytorch) ★ 0
Abstract
We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.
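The key ingredient described above is supervising a feed-forward network with distances between feature maps of a fixed, pretrained network rather than per-pixel differences. The toy sketch below illustrates that feature-reconstruction loss; it uses small frozen random convolutions as a stand-in for the pretrained VGG-16 layers the paper uses, and the function names (`conv2d_valid`, `feature_reconstruction_loss`) are illustrative, not taken from the authors' code.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive single-channel 'valid' 2-D convolution followed by ReLU.

    Stand-in for one layer of a frozen pretrained feature extractor
    (the paper uses activations from VGG-16)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return np.maximum(out, 0.0)  # ReLU nonlinearity

def feature_reconstruction_loss(y_hat, y, kernels):
    """Mean squared distance between feature maps of output and target.

    This is the perceptual (feature-reconstruction) loss idea: compare
    images in a fixed network's feature space, not pixel space."""
    loss = 0.0
    for k in kernels:
        f_hat = conv2d_valid(y_hat, k)
        f = conv2d_valid(y, k)
        loss += np.mean((f_hat - f) ** 2)
    return loss

rng = np.random.default_rng(0)
# Frozen "feature extractor": weights are fixed, never trained.
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
img = rng.random((16, 16))
```

During training, this loss would replace the per-pixel loss: gradients flow through the frozen extractor into the feed-forward transformation network, which is the only part being optimized.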
Tasks
- Style Transfer
- Super-Resolution
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| BSD100 (4× upscaling) | Perceptual Loss | PSNR (dB) | 24.95 | — | Unverified |
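The PSNR metric in the table is measured in decibels from mean squared error: PSNR = 10·log10(MAX² / MSE), where MAX is the peak pixel value (255 for 8-bit images). A minimal sketch (the function name and flat-list image representation are illustrative, not from the benchmark's evaluation code):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images,
    given here as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Note that the paper's point is precisely that PSNR penalizes per-pixel deviations, so a model trained with a perceptual loss can look better to humans while scoring lower PSNR than a per-pixel-trained baseline.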