SOTAVerified

SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration

2020-11-24 · CVPR 2021 · Code Available

Sheng Ao, Qingyong Hu, Bo Yang, Andrew Markham, Yulan Guo


Abstract

Extracting robust and general 3D local features is key to downstream tasks such as point cloud registration and reconstruction. Existing learning-based local descriptors are either sensitive to rotation transformations, or rely on classical handcrafted features which are neither general nor representative. In this paper, we introduce a new, yet conceptually simple, neural architecture, termed SpinNet, to extract local features which are rotationally invariant whilst sufficiently informative to enable accurate registration. A Spatial Point Transformer is first introduced to map the input local surface into a carefully designed cylindrical space, enabling end-to-end optimization with an SO(2)-equivariant representation. A Neural Feature Extractor, which leverages powerful point-based and 3D cylindrical convolutional neural layers, is then utilized to derive a compact and representative descriptor for matching. Extensive experiments on both indoor and outdoor datasets demonstrate that SpinNet outperforms existing state-of-the-art techniques by a large margin. More critically, it has the best generalization ability across unseen scenarios with different sensor modalities. The code is available at https://github.com/QingyongHu/SpinNet.
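The key idea behind the cylindrical representation can be illustrated with a small sketch: if a local patch is expressed in cylindrical coordinates (rho, theta, z) around a reference axis, then rotating the patch about that axis becomes a pure shift in theta, which is what makes the representation SO(2)-equivariant. The sketch below is a minimal illustration of this coordinate mapping under assumed conventions (a given patch center and reference axis); it is not the authors' implementation, and the function name `to_cylindrical` is hypothetical.

```python
import numpy as np

def to_cylindrical(points, center, axis):
    """Map a local point patch to cylindrical coordinates (rho, theta, z)
    around a reference axis through `center`.

    A rotation of the patch about `axis` only shifts theta, leaving rho
    and z unchanged: the SO(2)-equivariance that SpinNet's cylindrical
    space exploits. Illustrative sketch, not the authors' code.
    """
    axis = axis / np.linalg.norm(axis)
    p = points - center
    z = p @ axis                       # height along the reference axis
    radial = p - np.outer(z, axis)     # component orthogonal to the axis
    rho = np.linalg.norm(radial, axis=1)
    # Pick an arbitrary in-plane direction to measure theta from.
    ref = np.array([1.0, 0.0, 0.0])
    if abs(ref @ axis) > 0.9:          # avoid a ref nearly parallel to axis
        ref = np.array([0.0, 1.0, 0.0])
    u = ref - (ref @ axis) * axis
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    theta = np.arctan2(radial @ v, radial @ u)
    return np.stack([rho, theta, z], axis=1)
```

In a full pipeline one would then bin (rho, theta, z) into a cylindrical voxel grid and let later pooling over the theta dimension turn the equivariance into invariance; only the coordinate change itself is shown here.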


Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 3DMatch Benchmark | SpinNet (no code published as of Dec 15 2020) | Feature Matching Recall | 97.6 | | Unverified |
| 3DMatch (trained on KITTI) | SpinNet | Recall | 0.85 | | Unverified |
| ETH (trained on 3DMatch) | SpinNet | Feature Matching Recall | 0.93 | | Unverified |
| FPv1 | SpinNet | Recall (3 cm, 10 degrees) | 42.46 | | Unverified |
| KITTI | SpinNet | Success Rate | 99.1 | | Unverified |
| KITTI (trained on 3DMatch) | SpinNet | Success Rate | 81.44 | | Unverified |
