SOTAVerified

CurlingNet: Compositional Learning between Images and Text for Fashion IQ Data

2020-03-27 · Code Available

Youngjae Yu, Seunghwan Lee, Yuncheol Choi, Gunhee Kim


Abstract

We present an approach named CurlingNet that can measure the semantic distance between compositions of image-text embeddings. To learn an effective image-text composition for data in the fashion domain, our model proposes two key components. First, the Delivery component performs the transition of a source image within the embedding space. Second, the Sweeping component emphasizes query-related channels of fashion images in the embedding space; we use a channel-wise gating mechanism to this end. Our single model outperforms previous state-of-the-art image-text composition models, including TIRG and FiLM. We participated in the first Fashion-IQ challenge at ICCV 2019, in which an ensemble of our models achieved one of the best performances.
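The abstract describes two operations: a "Delivery" shift of the source image embedding, and a "Sweeping" channel-wise gate that re-weights query-relevant channels. The following is a minimal illustrative sketch of that idea, not the authors' code; the function name `gated_compose` and the learned weight matrices `W_d` and `W_s` are hypothetical stand-ins for trained parameters.

```python
import numpy as np

def gated_compose(img, txt, W_d, W_s):
    """Sketch of a channel-wise gated image-text composition.

    img, txt : 1-D embeddings of equal dimension d
    W_d, W_s : (d, 2d) weight matrices (hypothetical trained parameters)
    """
    joint = np.concatenate([img, txt])            # joint image-text features
    shifted = img + W_d @ joint                   # "Delivery": shift the source image
    gate = 1.0 / (1.0 + np.exp(-(W_s @ joint)))   # "Sweeping": sigmoid gate in (0, 1)
    return gate * shifted                         # emphasize query-related channels
```

With zero-initialized weights the gate is 0.5 everywhere and the output is simply half the source embedding; training would learn to shift and gate toward the text-modified target.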

Benchmark Results

| Dataset    | Model      | Metric                      | Claimed | Verified | Status     |
|------------|------------|-----------------------------|---------|----------|------------|
| Fashion IQ | CurlingNet | (Recall@10 + Recall@50) / 2 | 38.45   |          | Unverified |
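The challenge metric in the table averages Recall@10 and Recall@50 over retrieval queries. A small sketch of how such a score is computed (function names are illustrative, not from the challenge toolkit):

```python
def recall_at_k(ranked_lists, targets, k):
    """Fraction of queries whose target item appears in the top-k retrieved items."""
    hits = sum(1 for ranked, t in zip(ranked_lists, targets) if t in ranked[:k])
    return hits / len(targets)

def fashion_iq_score(recall_10, recall_50):
    """Average of Recall@10 and Recall@50, as used in the table above."""
    return (recall_10 + recall_50) / 2
```

For example, recalls of 0.30 at k=10 and 0.47 at k=50 would give a score of 0.385, i.e. 38.5 on the percentage scale used above.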

Reproductions