A Strong Baseline for Fashion Retrieval with Person Re-Identification Models
Mikolaj Wieczorek, Andrzej Michalowski, Anna Wroblewska, Jacek Dabrowski
Code: github.com/mikwieczorek/centroids-reid (official, PyTorch)
Abstract
Fashion retrieval is the challenging task of finding an exact match for fashion items contained within an image. Difficulties arise from the fine-grained nature of clothing items and from very large intra-class and inter-class variance. Additionally, query and source images for the task usually come from different domains: street photos and catalogue photos, respectively. Due to these differences, a significant gap in quality, lighting, contrast, background clutter and item presentation exists between domains. As a result, fashion retrieval is an active field of research in both academia and industry. Inspired by recent advancements in Person Re-Identification (ReID) research, we adapt leading ReID models for use in fashion retrieval tasks. We introduce a simple baseline model for fashion retrieval that significantly outperforms previous state-of-the-art results despite a much simpler architecture. We conduct in-depth experiments on the Street2Shop and DeepFashion datasets and validate our results. Finally, we propose a cross-domain (cross-dataset) evaluation method to test the robustness of fashion retrieval models.
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| DeepFashion - Consumer-to-shop | RST Model (ResNet50-IBN-A, 320x320) | mAP | 43 | — | Unverified |
| Exact Street2Shop | RST Model (ResNet50-IBN-A, 320x320) | Rank-1 | 53.7 | — | Unverified |
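The Rank-1 and mAP figures in the table are standard retrieval metrics: query (street) and gallery (shop) images are embedded, the gallery is ranked by similarity to each query, and the ranked list is scored. A minimal NumPy sketch of this evaluation is shown below; it is illustrative rather than the authors' implementation, and cosine similarity over L2-normalized embeddings is an assumption (the paper's models may rank with a different distance):

```python
import numpy as np

def retrieve(query_emb, gallery_emb):
    # L2-normalize so the dot product equals cosine similarity
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T
    # per query: gallery indices sorted by decreasing similarity
    return np.argsort(-sims, axis=1)

def rank_k_accuracy(ranking, query_ids, gallery_ids, k=1):
    # fraction of queries whose top-k results contain a matching item id
    hits = [query_ids[i] in gallery_ids[ranking[i, :k]]
            for i in range(len(query_ids))]
    return float(np.mean(hits))

def mean_average_precision(ranking, query_ids, gallery_ids):
    # mAP: mean over queries of the average precision of the ranked list
    aps = []
    for i in range(len(query_ids)):
        rel = (gallery_ids[ranking[i]] == query_ids[i]).astype(float)
        if rel.sum() == 0:
            continue  # skip queries with no match in the gallery
        precision_at = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append((precision_at * rel).sum() / rel.sum())
    return float(np.mean(aps))
```

In the cross-domain setting the paper proposes, the same functions apply unchanged: the model is trained on one dataset and the query/gallery embeddings are computed on the other.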