
CompleteDT: Point Cloud Completion with Dense Augment Inference Transformers

2022-05-30

Jun Li, Shangwei Guo, Shaokun Han


Abstract

The point cloud completion task aims to predict the missing parts of incomplete point clouds and generate complete point clouds with fine details. In this paper, we propose a novel point cloud completion network, CompleteDT. Specifically, features are learned from point clouds at multiple resolutions, which are sampled from the incomplete input, and are converted into a series of spots based on the geometric structure. Then, a transformer-based Dense Relation Augment Module (DRA) is proposed to learn features within spots and model the correlations among them. The DRA consists of a Point Local-Attention Module (PLA) and a Point Dense Multi-Scale Attention Module (PDMA): the PLA captures local information within each spot by adaptively weighting neighbors, and the PDMA exploits the global relationships between spots in a multi-scale, densely connected manner. Lastly, the complete shape is predicted from the spots by the Multi-resolution Point Fusion Module (MPF), which gradually generates complete point clouds from the spots and updates the spots based on these generated point clouds. Experimental results show that, because the transformer-based DRA learns expressive features from the incomplete input and the MPF fully exploits these features to predict the complete shape, our method largely outperforms state-of-the-art methods.
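The abstract describes the PLA as attending over a point's local neighborhood with adaptively weighted neighbors. As a rough illustration of that general idea (not the authors' implementation — the function names, feature shapes, and the dot-product scoring below are all our own assumptions), a minimal NumPy sketch of attention over each point's k-nearest neighbors might look like this:

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors of each point (including itself)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # (N, N) squared distances
    return np.argsort(d2, axis=1)[:, :k]                           # (N, k)

def local_attention(points, feats, k=8):
    """Attend over each point's k-NN neighborhood and aggregate features.

    points: (N, 3) coordinates; feats: (N, C) per-point features.
    Returns (N, C) locally aggregated features. This is a generic
    neighborhood-attention sketch, not the paper's PLA module.
    """
    idx = knn_indices(points, k)               # (N, k) neighbor indices
    neigh = feats[idx]                         # (N, k, C) gathered neighbor features
    # Scaled dot-product similarity between each point and its neighbors.
    logits = np.einsum('nc,nkc->nk', feats, neigh) / np.sqrt(feats.shape[1])
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)          # softmax over the neighborhood
    return np.einsum('nk,nkc->nc', w, neigh)   # weighted aggregation

rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))
f = rng.normal(size=(64, 16))
out = local_attention(pts, f, k=8)
print(out.shape)  # (64, 16)
```

The softmax-weighted aggregation is what makes the neighbor weights "adaptive": each point mixes its neighbors' features in proportion to feature similarity rather than with fixed averaging.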
