Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis
Chuan Li, Michael Wand
Code
- github.com/chuanli11/CNNMRF (official, Torch)
- github.com/paulwarkentin/pytorch-neural-doodle (PyTorch)
- github.com/awentzonline/image-analogies (TensorFlow)
- github.com/factoryIO/1-simple_neural_style_transfer (TensorFlow)
- github.com/alexjc/neural-doodle
- github.com/DmitryUlyanov/fast-neural-doodle (Torch)
- github.com/Garfield35/Doodle (TensorFlow)
Abstract
This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images. The generative MRF acts on the higher levels of a dCNN feature pyramid, controlling the image layout at an abstract level. We apply the method to both photographic and non-photo-realistic (artwork) synthesis tasks. The MRF regularizer prevents over-excitation artifacts and reduces the implausible feature mixtures common to previous dCNN inversion approaches, permitting the synthesis of photographic content with increased visual plausibility. Unlike standard MRF-based texture synthesis, the combined system can both match and adapt local features with considerable variability, yielding results far out of reach of classic generative MRF methods.
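The core idea sketched in the abstract can be illustrated in code. The following is a minimal, hypothetical NumPy sketch (not the authors' implementation): the MRF term extracts small patches from the feature map of the image being synthesized, matches each to its best style patch by normalized cross-correlation, and penalizes the squared distance to that match. Function names, the patch size `k`, and the use of raw arrays in place of real dCNN activations are all illustrative assumptions.

```python
import numpy as np

def extract_patches(feat, k=3):
    """Extract all k x k patches from a C x H x W feature map,
    flattened to vectors of length C*k*k."""
    C, H, W = feat.shape
    patches = []
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            patches.append(feat[:, i:i + k, j:j + k].ravel())
    return np.stack(patches)

def mrf_loss(synth_feat, style_feat, k=3):
    """Sketch of an MRF regularizer on dCNN features: each synthesized
    patch is matched to its nearest style patch by normalized
    cross-correlation, then penalized by the squared distance to it."""
    P = extract_patches(synth_feat, k)   # patches of the synthesized image
    Q = extract_patches(style_feat, k)   # candidate patches from the style
    # Normalized cross-correlation == cosine similarity between patches.
    Pn = P / (np.linalg.norm(P, axis=1, keepdims=True) + 1e-8)
    Qn = Q / (np.linalg.norm(Q, axis=1, keepdims=True) + 1e-8)
    nn = np.argmax(Pn @ Qn.T, axis=1)    # index of best-matching style patch
    return float(np.sum((P - Q[nn]) ** 2))
```

In the full method this loss would be evaluated on higher-level layers of a pretrained dCNN and minimized with respect to the synthesized image, alongside a content term; here plain arrays stand in for those activations.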