Learning Token-based Representation for Image Retrieval
Hui Wu, Min Wang, Wengang Zhou, Yang Hu, Houqiang Li
Code: github.com/mcc-wh/token (official implementation, PyTorch, ★ 70)
Abstract
In image retrieval, deep local features learned in a data-driven manner have been demonstrated to be effective for improving retrieval performance. To realize efficient retrieval on large image databases, some approaches quantize deep local features with a large codebook and match images with an aggregated match kernel. However, the complexity of these approaches is non-trivial and their memory footprint is large, which limits their capability to jointly perform feature learning and aggregation. To generate compact global representations while maintaining regional matching capability, we propose a unified framework that jointly learns local feature representation and aggregation. In our framework, we first extract deep local features using CNNs. Then, we design a tokenizer module to aggregate them into a few visual tokens, each corresponding to a specific visual pattern. This helps remove background noise and capture the more discriminative regions in the image. Next, a refinement block is introduced to enhance the visual tokens with self-attention and cross-attention. Finally, the different visual tokens are concatenated to generate a compact global representation. The whole framework is trained end-to-end with image-level labels. Extensive experiments show that our approach outperforms state-of-the-art methods on the Revisited Oxford and Paris datasets.
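The pipeline in the abstract can be sketched in PyTorch roughly as follows. This is a minimal illustration of the described idea, not the authors' official code: the module name, token count, embedding dimension, and the exact attention layout are assumptions. Local CNN features are pooled into a few visual tokens via learned spatial attention maps, the tokens are refined with self-attention among themselves and cross-attention back to the feature map, and the refined tokens are concatenated into one compact global descriptor.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TokenRetrievalHead(nn.Module):
    """Hypothetical sketch of token-based aggregation for image retrieval.

    Aggregates CNN local features (B, C, H, W) into `num_tokens` visual
    tokens, refines them with self- and cross-attention, and concatenates
    them into a single L2-normalized global descriptor of size L*C.
    """

    def __init__(self, dim: int = 256, num_tokens: int = 4, num_heads: int = 8):
        super().__init__()
        # Tokenizer: one spatial attention map per visual token.
        self.token_attn = nn.Conv2d(dim, num_tokens, kernel_size=1)
        # Refinement block: tokens attend to each other, then to the
        # local feature map (cross-attention), with residual connections.
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) local features from a CNN backbone.
        flat = feats.flatten(2).transpose(1, 2)                   # (B, HW, C)
        # Softmax over spatial positions -> weighted sums of local features.
        attn = self.token_attn(feats).flatten(2).softmax(dim=-1)  # (B, L, HW)
        tokens = attn @ flat                                      # (B, L, C)
        tokens = tokens + self.self_attn(tokens, tokens, tokens)[0]
        tokens = tokens + self.cross_attn(tokens, flat, flat)[0]
        # Concatenate all tokens into one compact global representation.
        return F.normalize(tokens.flatten(1), dim=-1)             # (B, L*C)
```

With `dim=256` and `num_tokens=4`, a batch of feature maps `(B, 256, H, W)` yields `(B, 1024)` descriptors, which can be compared by inner product for retrieval.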
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ROxford (Hard) | Token | mAP | 66.57 | — | Unverified |
| ROxford (Medium) | Token | mAP | 82.28 | — | Unverified |
| RParis (Hard) | Token | mAP | 78.56 | — | Unverified |
| RParis (Medium) | Token | mAP | 89.34 | — | Unverified |