Instance Normalization: The Missing Ingredient for Fast Stylization
2016-07-27
Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky
Code
- github.com/DmitryUlyanov/texture_nets (official, in paper; Torch) ★ 0
- github.com/labmlai/annotated_deep_learning_paper_implementations (PyTorch) ★ 66,103
- github.com/hollygrimm/cyclegan-keras-art-attrs (framework unspecified) ★ 5
- github.com/tbullmann/imagetranslation-tensorflow (TensorFlow) ★ 0
- github.com/cryu854/FastStyle (TensorFlow) ★ 0
- github.com/MarvinLavechin/imagetranslation-tensorflow (TensorFlow) ★ 0
- github.com/riven314/PerceptualLoss-FastAI (PyTorch) ★ 0
- github.com/milmor/perceptual-losses-neural-style (TensorFlow) ★ 0
- github.com/the-super-toys/glimpse-models (PyTorch) ★ 0
- github.com/brightyoun/Video-Style-Transfer (PyTorch) ★ 0
Abstract
In this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and applying the latter at both training and testing time. The resulting method can be used to train high-performance architectures for real-time image generation. The code is available on GitHub at https://github.com/DmitryUlyanov/texture_nets. The full paper can be found at arXiv:1701.02096.
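The key change the abstract describes is normalizing each image's feature statistics independently, rather than pooling statistics across the batch as batch normalization does. A minimal NumPy sketch of this idea (function name and `eps` value are illustrative, not taken from the paper's code):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization for a feature tensor x of shape (N, C, H, W).

    Each (H, W) plane is normalized with its own mean and variance, so
    every image in the batch (and every channel) is treated independently.
    Batch normalization would instead compute mean/var over axes (0, 2, 3),
    mixing statistics across the batch.
    """
    mean = x.mean(axis=(2, 3), keepdims=True)  # per-sample, per-channel mean
    var = x.var(axis=(2, 3), keepdims=True)    # per-sample, per-channel variance
    return (x - mean) / np.sqrt(var + eps)
```

Because the statistics come from the single image being processed, the same computation is used at both training and test time, which is the behavioral difference from batch normalization that the abstract highlights.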