GLEAM: Learning to Match and Explain in Cross-View Geo-Localization
Xudong Lu, Zhi Zheng, Yi Wan, Yongxiang Yao, Annan Wang, Renrui Zhang, Panwang Xia, Qiong Wu, Qingyun Li, Weifeng Lin, Xiangyu Zhao, Peifeng Ma, Xue Yang, Hongsheng Li
Abstract
Cross-View Geo-Localization (CVGL) focuses on identifying correspondences between images captured from distinct perspectives of the same geographical location. However, existing CVGL approaches are typically restricted to a single view or modality, and their direct visual matching strategy lacks interpretability: they only determine whether two images correspond, without explaining the rationale behind the match. In this paper, we present GLEAM-C, a foundational CVGL model that unifies multiple views and modalities by aligning them exclusively with satellite imagery. Our framework improves training efficiency through an optimized implementation and, via a novel two-phase training strategy, achieves accuracy comparable to prior modality-specific CVGL models. To address interpretability, we further propose GLEAM-X, a novel task that combines cross-view correspondence prediction with explainable reasoning enabled by multimodal large language models (MLLMs). We construct a bilingual benchmark using commercial MLLMs to generate training and testing data, and refine the test set through rigorous human revision to support systematic evaluation of explainable cross-view reasoning. Together, GLEAM-C and GLEAM-X form a comprehensive CVGL pipeline that unifies accurate multi-modal, multi-view matching with interpretable correspondence analysis, advancing Geo-Localization by enabling models to better Explain And Match. Code and datasets used in this work will be made publicly accessible at https://github.com/Lucky-Lance/GLEAM.
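The abstract states that GLEAM-C unifies multiple views and modalities by aligning each of them exclusively with satellite imagery, rather than pairwise with one another. The sketch below illustrates what such satellite-anchored alignment could look like in training; it is only an assumption-laden illustration, not the paper's implementation: the InfoNCE-style symmetric contrastive loss, the temperature value, and the modality names are placeholders the abstract does not specify.

```python
import torch
import torch.nn.functional as F

def satellite_anchored_contrastive_loss(sat_emb, query_embs, temperature=0.07):
    """Align every query view/modality with satellite embeddings only,
    treating satellite images as the shared anchor.

    sat_emb:    (B, D) satellite-image embeddings for a batch of locations.
    query_embs: dict mapping a modality name (e.g. "drone", "street") to its
                (B, D) embeddings; row i of each tensor corresponds to the
                same location as row i of sat_emb.
    """
    sat = F.normalize(sat_emb, dim=-1)
    targets = torch.arange(sat.size(0), device=sat.device)  # matched pairs on the diagonal
    loss = 0.0
    for name, emb in query_embs.items():
        q = F.normalize(emb, dim=-1)
        logits = q @ sat.t() / temperature  # (B, B) cosine-similarity matrix
        # Symmetric cross-entropy: query->satellite and satellite->query directions.
        loss = loss + 0.5 * (F.cross_entropy(logits, targets)
                             + F.cross_entropy(logits.t(), targets))
    return loss / len(query_embs)

# Toy usage with random features standing in for encoder outputs.
B, D = 8, 256
sat_emb = torch.randn(B, D)
query_embs = {"drone": torch.randn(B, D), "street": torch.randn(B, D)}
print(satellite_anchored_contrastive_loss(sat_emb, query_embs))
```

Anchoring all modalities to the satellite view keeps the number of alignment objectives linear in the number of modalities instead of quadratic, which is one plausible reading of why the abstract emphasizes aligning "exclusively with satellite imagery".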