Multi-style Generative Network for Real-time Transfer
Hang Zhang, Kristin Dana
Code
- github.com/zhanghang1989/PyTorch-Multi-Style-Transfer (official, PyTorch)
- github.com/germanjke/StyleTransformerGANs (PyTorch)
- github.com/sonnguyentruong129/msgnet-tf (TensorFlow)
- github.com/noufali/VideoML (PyTorch)
- github.com/lxy5513/Multi-Style-Transfer (PyTorch)
- github.com/habout632/gans (PyTorch)
Abstract
Despite the rapid progress in style transfer, existing approaches that use a feed-forward generative network for multi-style or arbitrary-style transfer usually compromise image quality and model flexibility. We find that it is fundamentally difficult to achieve comprehensive style modeling using a 1-dimensional style embedding. Motivated by this, we introduce the CoMatch Layer, which learns to match the second-order feature statistics of the target styles. With the CoMatch Layer, we build a Multi-style Generative Network (MSG-Net) that achieves real-time performance. We also employ a specific strategy of upsampled convolution that avoids the checkerboard artifacts caused by fractionally-strided convolution. Our method achieves superior image quality compared to state-of-the-art approaches. As a general approach for real-time style transfer, the proposed MSG-Net is compatible with most existing techniques, including content-style interpolation, color preservation, spatial control and brush stroke size control. MSG-Net is the first to achieve real-time brush-size control in a purely feed-forward manner for style transfer. Our implementations and pre-trained models for the Torch, PyTorch and MXNet frameworks will be publicly available.
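The "second-order feature statistics" the abstract refers to are Gram matrices of convolutional feature maps, the standard style representation in neural style transfer. The following is a minimal NumPy sketch, not the paper's CoMatch implementation: it only illustrates computing the Gram matrix of a C x H x W feature map for a content image and a target style, the two quantities a CoMatch-style layer would learn to align. The array shapes and normalization here are illustrative assumptions.

```python
import numpy as np

def gram_matrix(features):
    """Second-order statistics (Gram matrix) of a C x H x W feature map.

    Flattens the spatial dimensions and takes the channel-by-channel
    inner products, normalized by the number of entries.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

# Toy feature maps standing in for VGG activations (shapes are illustrative).
rng = np.random.default_rng(0)
content_feat = rng.standard_normal((8, 16, 16))
style_feat = rng.standard_normal((8, 16, 16))

G_c = gram_matrix(content_feat)  # content statistics
G_s = gram_matrix(style_feat)    # target style statistics
print(G_c.shape)  # (8, 8): one entry per pair of channels
```

A CoMatch-style layer would then transform the content features so that their Gram matrix moves toward `G_s`, rather than encoding the style as a 1-dimensional embedding.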