SuperPoint: Self-Supervised Interest Point Detection and Description
Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
Code
- github.com/magicleap/SuperPointPretrainedNetwork (official, PyTorch, ★ 0)
- github.com/huggingface/transformers (PyTorch, ★ 158,292)
- github.com/fabio-sim/LightGlue-ONNX (PyTorch, ★ 588)
- github.com/borglab/gtsfm (★ 507)
- github.com/ucuapps/openglue (PyTorch, ★ 362)
- github.com/gabriel-sgama/semantic-superpoint (PyTorch, ★ 45)
- github.com/tzvikif/SuperGlue (PyTorch, ★ 8)
- github.com/AliYoussef97/SuperPoint-PrP (PyTorch, ★ 4)
- github.com/Thomacdebabo/KP2Dtiny (TensorFlow, ★ 1)
- github.com/alekseychuiko/SuperPointMagic (PyTorch, ★ 0)
Abstract
This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches when compared to LIFT, SIFT and ORB.
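The core of Homographic Adaptation is to average a detector's responses over many random warps of the same image: each warped copy is run through the detector, the resulting heatmaps are un-warped back into the original frame, and the aggregate keeps only points that fire consistently across views. The sketch below illustrates that aggregation loop in plain NumPy; the random-homography scheme, the toy gradient-magnitude `detector`, and all function names are illustrative assumptions, not the paper's implementation (which uses a trained MagicPoint network and a richer homography sampling strategy).

```python
import numpy as np

def random_homography(rng, scale=0.1):
    # Hypothetical sampling scheme: identity plus a small random
    # perturbation of the top two rows (the paper samples homographies
    # by composing translation, rotation, scale, and perspective terms).
    H = np.eye(3)
    H[:2, :] += rng.uniform(-scale, scale, size=(2, 3))
    return H

def warp_image(img, H):
    # Inverse-map every output pixel through H^-1 and sample the source
    # with nearest-neighbor lookup: out(p) = img(H^-1 p).
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ coords
    src = src[:2] / src[2]
    xi = np.clip(np.round(src[0]).astype(int), 0, w - 1)
    yi = np.clip(np.round(src[1]).astype(int), 0, h - 1)
    out = img[yi, xi].reshape(h, w)
    # Mask of output pixels whose pre-image falls inside the source.
    valid = ((src[0] >= 0) & (src[0] <= w - 1) &
             (src[1] >= 0) & (src[1] <= h - 1)).reshape(h, w)
    return out, valid.astype(float)

def toy_detector(img):
    # Stand-in for the interest point network: gradient magnitude,
    # normalized to [0, 1]. Any per-pixel heatmap detector fits here.
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

def homographic_adaptation(image, detector, num_homographies=32, seed=0):
    # Aggregate detector heatmaps over random homographic warps:
    # run the detector on each warped copy, un-warp the heatmap with
    # the inverse homography, and average over valid pixels.
    rng = np.random.default_rng(seed)
    acc = detector(image).astype(float)      # identity warp counts once
    counts = np.ones_like(acc)
    for _ in range(num_homographies):
        H = random_homography(rng)
        warped, _ = warp_image(image, H)
        heat = detector(warped)
        unwarped, valid = warp_image(heat, np.linalg.inv(H))
        acc += unwarped * valid
        counts += valid
    return acc / counts
```

A point that is only detected under one particular warp contributes little to the average, while a corner that survives every homography keeps a high score, which is what makes the adapted pseudo-labels more repeatable than the raw detector output.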