From Coarse to Fine: Robust Hierarchical Localization at Large Scale
Paul-Edouard Sarlin, Cesar Cadena, Roland Siegwart, Marcin Dymczyk
Code
- github.com/ethz-asl/hf_net (official, in paper, TensorFlow)
- github.com/ethz-asl/hfnet (official, in paper, TensorFlow)
- github.com/cvg/Hierarchical-Localization (PyTorch)
Abstract
Robust and accurate visual localization is a fundamental capability for numerous applications, such as autonomous driving, mobile robotics, or augmented reality. It remains, however, a challenging task, particularly for large-scale environments and in the presence of significant appearance changes. State-of-the-art methods not only struggle with such scenarios, but are often too resource-intensive for certain real-time applications. In this paper we propose HF-Net, a hierarchical localization approach based on a monolithic CNN that simultaneously predicts local features and global descriptors for accurate 6-DoF localization. We exploit the coarse-to-fine localization paradigm: we first perform a global retrieval to obtain location hypotheses and only later match local features within those candidate places. This hierarchical approach yields significant runtime savings and makes our system suitable for real-time operation. By leveraging learned descriptors, our method achieves remarkable localization robustness across large variations of appearance and sets a new state of the art on two challenging benchmarks for large-scale localization.
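The coarse-to-fine paradigm described above can be sketched in two stages: a coarse retrieval that ranks database images by global-descriptor similarity, followed by fine local-feature matching restricted to the retrieved candidates. The sketch below is illustrative only, assuming generic NumPy arrays for descriptors; the function names (`retrieve_candidates`, `match_local`) are hypothetical and not part of the HF-Net codebase.

```python
import numpy as np

def retrieve_candidates(query_global, db_globals, k=3):
    """Coarse step: rank database images by cosine similarity of
    global descriptors and keep the top-k location hypotheses."""
    sims = db_globals @ query_global / (
        np.linalg.norm(db_globals, axis=1) * np.linalg.norm(query_global) + 1e-8)
    return np.argsort(-sims)[:k]

def match_local(query_desc, cand_desc, ratio=0.8):
    """Fine step: nearest-neighbor matching of local descriptors
    with Lowe's ratio test, returning (query_idx, candidate_idx) pairs."""
    dists = np.linalg.norm(query_desc[:, None] - cand_desc[None], axis=2)
    order = np.argsort(dists, axis=1)
    matches = []
    for i, nn in enumerate(order):
        best, second = nn[0], nn[1]
        # Accept only clearly discriminative matches.
        if dists[i, best] < ratio * dists[i, second]:
            matches.append((i, best))
    return matches
```

In a full system, the matches for each candidate place would then feed a PnP + RANSAC solver to recover the 6-DoF pose; restricting local matching to a handful of retrieved places is what makes the hierarchical approach fast.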
Benchmark Results
| Dataset | Model | Metric | Claimed (%) | Verified | Status |
|---|---|---|---|---|---|
| Berlin Kudamm | HF-Net | Recall@1 | 46.78 | — | Unverified |