StyleShot: A Snapshot on Any Style
Junyao Gao, Yanchen Liu, Yanan Sun, Yinhao Tang, Yanhong Zeng, Kai Chen, Cairong Zhao
Code
- github.com/open-mmlab/StyleShot (official, PyTorch, ★ 457)
- github.com/Vill-Lab/forkOfStyleShot (fork, PyTorch, ★ 0)
Abstract
In this paper, we show that a good style representation is crucial and sufficient for generalized style transfer without test-time tuning. We achieve this by constructing a style-aware encoder and a well-organized style dataset called StyleGallery. With a dedicated design for style learning, the style-aware encoder is trained with a decoupling training strategy to extract expressive style representations, while StyleGallery provides the ability to generalize across styles. We further employ a content-fusion encoder to enhance image-driven style transfer. We highlight that our approach, named StyleShot, is simple yet effective in mimicking various desired styles, e.g., 3D, flat, abstract, or even fine-grained styles, without test-time tuning. Rigorous experiments validate that StyleShot achieves superior performance across a wide range of styles compared to existing state-of-the-art methods. The project page is available at: https://styleshot.github.io/.
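To make the architecture described above concrete, here is a minimal, hypothetical PyTorch sketch of the two-encoder idea: a style-aware encoder that turns a reference style image into a small set of style tokens, a content-fusion encoder that produces spatial content features, and a toy denoiser that consumes both via cross-attention. All module names, layer choices, token counts, and shapes are illustrative assumptions; this is not the official StyleShot implementation (see the repository linked above).

```python
# Illustrative sketch only -- NOT the official StyleShot code.
import torch
import torch.nn as nn


class StyleAwareEncoder(nn.Module):
    # Maps a reference style image to a small set of style tokens.
    def __init__(self, dim=256, token_grid=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=4), nn.GELU(),
            nn.Conv2d(64, dim, kernel_size=4, stride=4), nn.GELU(),
            nn.AdaptiveAvgPool2d(token_grid),      # token_grid**2 style tokens
        )

    def forward(self, style_img):                  # (B, 3, H, W)
        f = self.features(style_img)               # (B, dim, g, g)
        return f.flatten(2).transpose(1, 2)        # (B, g*g, dim)


class ContentFusionEncoder(nn.Module):
    # Encodes a content image into a spatial map fused with the denoiser input.
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.GELU(),
            nn.Conv2d(64, dim, kernel_size=3, padding=1),
        )

    def forward(self, content_img):
        return self.net(content_img)               # (B, dim, H, W)


class ToyStylizedDenoiser(nn.Module):
    # Placeholder for a diffusion UNet: content features are added to the input,
    # style tokens are injected through a single cross-attention layer.
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.in_proj = nn.Conv2d(3, dim, kernel_size=3, padding=1)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out_proj = nn.Conv2d(dim, 3, kernel_size=3, padding=1)

    def forward(self, noisy_img, style_tokens, content_feat):
        h = self.in_proj(noisy_img) + content_feat            # content fusion
        b, c, hh, ww = h.shape
        q = h.flatten(2).transpose(1, 2)                      # (B, H*W, dim)
        attn_out, _ = self.cross_attn(q, style_tokens, style_tokens)
        h = (q + attn_out).transpose(1, 2).reshape(b, c, hh, ww)
        return self.out_proj(h)


if __name__ == "__main__":
    style_img = torch.randn(1, 3, 256, 256)
    content_img = torch.randn(1, 3, 256, 256)
    noisy = torch.randn(1, 3, 256, 256)
    tokens = StyleAwareEncoder()(style_img)
    feats = ContentFusionEncoder()(content_img)
    out = ToyStylizedDenoiser()(noisy, tokens, feats)
    print(out.shape)  # torch.Size([1, 3, 256, 256])
```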
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| StyleBench | StyleShot | CLIP Score | 0.66 | — | Unverified |
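The CLIP Score above measures text-image alignment of the stylized outputs. The exact StyleBench evaluation protocol is not reproduced here; the following is a generic sketch of CLIP text-image similarity using Hugging Face `transformers`, where the checkpoint choice and the averaging over a set of generated images are assumptions.

```python
# Generic CLIP text-image similarity sketch (not the exact StyleBench protocol).
# The checkpoint "openai/clip-vit-base-patch32" is an assumed choice.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


@torch.no_grad()
def clip_score(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between CLIP image and text embeddings, in [-1, 1]."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img * txt).sum(dim=-1).item()


# Example: average over generated images paired with their text prompts.
# scores = [clip_score(Image.open(path), prompt) for path, prompt in pairs]
# print(sum(scores) / len(scores))
```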