SOTAVerified

Lepard: Learning partial point cloud matching in rigid and deformable scenes

2021-11-24 · CVPR 2022 · Code Available

Yang Li, Tatsuya Harada


Abstract

We present Lepard, a learning-based approach for partial point cloud matching in rigid and deformable scenes. Its key characteristics are the following techniques that exploit 3D positional knowledge for point cloud matching: 1) an architecture that disentangles the point cloud representation into a feature space and a 3D position space; 2) a position encoding method that explicitly reveals 3D relative distance information through the dot product of vectors; 3) a repositioning technique that modifies the cross-point-cloud relative positions. Ablation studies demonstrate the effectiveness of these techniques. In rigid cases, Lepard combined with RANSAC and ICP achieves state-of-the-art registration recall of 93.9% / 71.3% on the 3DMatch / 3DLoMatch benchmarks. In deformable cases, Lepard achieves +27.1% / +34.8% higher non-rigid feature matching recall than the prior art on our newly constructed 4DMatch / 4DLoMatch benchmark.
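The abstract's second technique, a position encoding whose dot product exposes relative 3D distance, is in the spirit of rotary position embeddings extended to three coordinates. A minimal NumPy sketch of that idea follows; the function names, the per-axis channel split, and the frequency base are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rotary_encode(x, pos, base=10000.0):
    """Rotate channel pairs of x (..., d), d even, by angles proportional to pos,
    so that <encode(q, p1), encode(k, p2)> depends only on p2 - p1."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)      # per-pair frequencies
    ang = pos * freqs                              # (..., half) rotation angles
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[..., :half], x[..., half:]
    # 2D rotation applied to each (x1_i, x2_i) channel pair
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def rotary_encode_3d(x, xyz, base=10000.0):
    """Split channels into three groups, encoding each with one of x/y/z."""
    d = x.shape[-1] // 3
    parts = [rotary_encode(x[..., i * d:(i + 1) * d], xyz[..., i:i + 1], base)
             for i in range(3)]
    return np.concatenate(parts, axis=-1)
```

Because each channel pair is only rotated, the encoding preserves feature norms, and the dot product of two encoded vectors is a function of their relative 3D position alone, which is the property the abstract highlights.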


Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 4DMatch | Li and Harada (θc=0.05) | NFMR | 83.9 | — | Unverified |
| 4DMatch | Li and Harada (θc=0.1) | NFMR | 83.7 | — | Unverified |
| 4DMatch | Li and Harada (θc=0.2) | NFMR | 82.2 | — | Unverified |
| 4DMatch | Predator (5000) | NFMR | 56.8 | — | Unverified |
| 4DMatch | Predator (3000) | NFMR | 56.4 | — | Unverified |
| 4DMatch | D3Feat (5000) | NFMR | 56.1 | — | Unverified |
| 4DMatch | D3Feat (3000) | NFMR | 55.5 | — | Unverified |
| 4DMatch | Predator (1000) | NFMR | 53.3 | — | Unverified |
| 4DMatch | D3Feat (1000) | NFMR | 51.6 | — | Unverified |
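The metric reported above, NFMR (non-rigid feature matching recall), roughly measures the fraction of ground-truth correspondences that can be recovered from the predicted matches by interpolating their flows. A simplified sketch under that reading follows; the k-NN inverse-distance interpolation, the threshold `tau`, and the helper name are illustrative assumptions, not the benchmark's exact definition:

```python
import numpy as np

def nfmr(src_gt, tgt_gt, src_pred, tgt_pred, tau=0.04, k=3):
    """For each ground-truth pair (u, v): estimate u's displacement by
    inverse-distance-weighted interpolation over the flows of its k nearest
    predicted source points; count it recovered if the warped point lands
    within tau of v. Returns the recovered fraction in [0, 1]."""
    flow_pred = tgt_pred - src_pred                 # (M, 3) predicted flows
    recovered = 0
    for u, v in zip(src_gt, tgt_gt):
        d = np.linalg.norm(src_pred - u, axis=1)    # distances to predicted anchors
        idx = np.argsort(d)[:k]                     # k nearest anchors
        w = 1.0 / (d[idx] + 1e-8)
        w /= w.sum()                                # normalized inverse-distance weights
        flow = (w[:, None] * flow_pred[idx]).sum(axis=0)
        if np.linalg.norm(u + flow - v) < tau:
            recovered += 1
    return recovered / len(src_gt)
```

On a scene where the predicted matches describe the true motion exactly (e.g. a pure translation), this sketch returns 1.0; sparser or noisier match sets lower the recovered fraction.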

Reproductions