PPF-FoldNet: Unsupervised Learning of Rotation Invariant 3D Local Descriptors

2018-08-30 · ECCV 2018 · Code Available

Haowen Deng, Tolga Birdal, Slobodan Ilic


Abstract

We present PPF-FoldNet for unsupervised learning of 3D local descriptors on pure point cloud geometry. Based on the folding-based auto-encoding of well-known point pair features, PPF-FoldNet offers many desirable properties: it requires neither supervision nor a sensitive local reference frame, benefits from point-set sparsity, is end-to-end and fast, and can extract powerful rotation-invariant descriptors. Thanks to a novel feature visualization, its evolution can be monitored to provide interpretable insights. Our extensive experiments demonstrate that despite having six-degree-of-freedom invariance and lacking training labels, our network achieves state-of-the-art results on standard benchmark datasets and outperforms its competitors when rotations and varying point densities are present. PPF-FoldNet achieves 9% higher recall on standard benchmarks, 23% higher recall when rotations are introduced into the same datasets, and a margin of more than 35% when point density is significantly decreased.
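For context on the rotation invariance claimed above: the point pair features the abstract refers to are the classic 4D features of Drost et al., built only from angles and a distance, which do not change under rigid motion. A minimal sketch (function names are illustrative, not from the paper's code):

```python
import numpy as np

def _angle(v1, v2):
    """Angle between two vectors, clipped for numerical robustness."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def point_pair_feature(p1, n1, p2, n2):
    """4D point pair feature for points p1, p2 with unit normals n1, n2:
    (angle(n1, d), angle(n2, d), angle(n1, n2), ||d||) where d = p2 - p1.
    Angles and distances are preserved by rotations and translations,
    so the feature is invariant to rigid transforms of the pair."""
    d = p2 - p1
    return np.array([
        _angle(n1, d),
        _angle(n2, d),
        _angle(n1, n2),
        np.linalg.norm(d),
    ])
```

Collecting such features over all pairs in a local patch yields the rotation-invariant input that PPF-FoldNet auto-encodes, which is why no local reference frame is needed.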

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 3DMatch Benchmark | PPF-FoldNet | Feature Matching Recall | 71.8 | | Unverified |

Reproductions