Guided Image-to-Image Translation with Bi-Directional Feature Transformation
Badour AlBahar, Jia-Bin Huang
- Code: github.com/vt-vl-lab/Guided-pix2pix (official, PyTorch)
Abstract
We address the problem of guided image-to-image translation, where we translate an input image into another while respecting the constraints provided by an external, user-provided guidance image. Various conditioning methods for leveraging the given guidance image have been explored, including input concatenation, feature concatenation, and conditional affine transformation of feature activations. All these conditioning mechanisms, however, are uni-directional, i.e., no information flows from the input image back to the guidance. To better utilize the constraints of the guidance image, we present a bi-directional feature transformation (bFT) scheme. We show that our bFT scheme outperforms other conditioning schemes and achieves results comparable to state-of-the-art methods on different tasks.
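The difference between uni-directional conditioning and the bi-directional scheme can be sketched as follows. This is a hypothetical NumPy illustration, not the authors' implementation: the parameter generator `affine_params` and the specific way parameters are derived are stand-ins, but the structure shows the key idea that in bFT each stream predicts affine parameters that modulate the other.

```python
import numpy as np

def affine_params(feat):
    # Hypothetical parameter generator: derive per-channel
    # scale (gamma) and shift (beta) from a feature map.
    # In practice these would come from learned conv layers.
    gamma = feat.mean(axis=(1, 2), keepdims=True) + 1.0
    beta = feat.std(axis=(1, 2), keepdims=True)
    return gamma, beta

def uni_directional(x, g):
    # Uni-directional conditioning (FiLM-style): the guidance
    # features g modulate the input features x, but g itself
    # is never influenced by x.
    gamma, beta = affine_params(g)
    return gamma * x + beta

def bi_directional(x, g):
    # Bi-directional feature transformation (bFT) sketch:
    # each stream modulates the other, so information also
    # flows from the input image back to the guidance branch.
    gamma_x, beta_x = affine_params(g)  # guidance -> input
    gamma_g, beta_g = affine_params(x)  # input -> guidance
    return gamma_x * x + beta_x, gamma_g * g + beta_g

# Toy feature maps with shape (channels, height, width).
x = np.random.rand(8, 16, 16)  # input-branch features
g = np.random.rand(8, 16, 16)  # guidance-branch features

y = uni_directional(x, g)          # only x is transformed
x_new, g_new = bi_directional(x, g)  # both streams updated
```

In the uni-directional case the guidance stream is frozen from the input's perspective; in the bi-directional case both feature maps are updated at each conditioning point, which is what lets the translation respect the guidance constraints more tightly.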
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Edge-to-Clothes | bFT | FID | 58.4 | — | Unverified |
| Edge-to-Handbags | bFT | FID | 74.9 | — | Unverified |
| Edge-to-Shoes | bFT | FID | 121.2 | — | Unverified |