
OmniGlue: Generalizable Feature Matching with Foundation Model Guidance

2024-05-21 · CVPR 2024 · Code Available

Hanwen Jiang, Arjun Karpur, Bingyi Cao, Qixing Huang, André Araujo

Abstract

The image matching field has been witnessing a continuous emergence of novel learnable feature matching techniques, with ever-improving performance on conventional benchmarks. However, our investigation shows that despite these gains, their potential for real-world applications is restricted by their limited generalization capabilities to novel image domains. In this paper, we introduce OmniGlue, the first learnable image matcher that is designed with generalization as a core principle. OmniGlue leverages broad knowledge from a vision foundation model to guide the feature matching process, boosting generalization to domains not seen at training time. Additionally, we propose a novel keypoint position-guided attention mechanism which disentangles spatial and appearance information, leading to enhanced matching descriptors. We perform comprehensive experiments on a suite of 7 datasets with varied image domains, including scene-level, object-centric and aerial images. OmniGlue's novel components lead to relative gains on unseen domains of 20.9% with respect to a directly comparable reference model, while also outperforming the recent LightGlue method by 9.5% relatively. Code and model can be found at https://hwjiang1510.github.io/OmniGlue
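The keypoint position-guided attention described in the abstract can be illustrated with a minimal sketch: attention weights are computed only from encoded keypoint positions, while the values being aggregated carry only appearance descriptors, keeping the spatial and appearance information sources separate. The following is a hypothetical PyTorch illustration of that idea, not the authors' released implementation; the module name, feature dimensions, and the MLP positional encoder are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionGuidedSelfAttention(nn.Module):
    """Illustrative sketch (not the released OmniGlue code).

    Attention weights come purely from keypoint positions (spatial
    information); the aggregated values are appearance descriptors,
    so the two information sources remain disentangled.
    """

    def __init__(self, desc_dim: int = 256, pos_dim: int = 64):
        super().__init__()
        # Hypothetical positional encoder: 2D keypoint coords -> pos_dim features.
        self.pos_encoder = nn.Sequential(
            nn.Linear(2, pos_dim), nn.ReLU(), nn.Linear(pos_dim, pos_dim)
        )
        self.q_proj = nn.Linear(pos_dim, pos_dim)   # queries from positions only
        self.k_proj = nn.Linear(pos_dim, pos_dim)   # keys from positions only
        self.v_proj = nn.Linear(desc_dim, desc_dim) # values from appearance only
        self.out_proj = nn.Linear(desc_dim, desc_dim)

    def forward(self, descriptors: torch.Tensor, keypoints: torch.Tensor) -> torch.Tensor:
        # descriptors: (B, N, desc_dim) appearance features of N keypoints
        # keypoints:   (B, N, 2) normalized (x, y) keypoint positions
        pos = self.pos_encoder(keypoints)                        # (B, N, pos_dim)
        q, k = self.q_proj(pos), self.k_proj(pos)
        v = self.v_proj(descriptors)
        scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)  # spatial attention pattern
        attn = F.softmax(scores, dim=-1)
        out = attn @ v                                           # aggregate appearance features
        return descriptors + self.out_proj(out)                  # residual descriptor update

# Example usage with random inputs:
layer = PositionGuidedSelfAttention()
desc = torch.randn(1, 512, 256)   # 512 keypoint descriptors
kpts = torch.rand(1, 512, 2)      # their normalized image coordinates
updated = layer(desc, kpts)       # (1, 512, 256) enhanced descriptors
```

The design choice illustrated here is that the attention pattern cannot leak appearance information into the spatial pathway (or vice versa), which is one plausible reading of "disentangles spatial and appearance information"; the paper and released code should be consulted for the actual formulation.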
