
3D Shape Generation and Completion through Point-Voxel Diffusion

2021-04-08 · ICCV 2021 · Code Available

Linqi Zhou, Yilun Du, Jiajun Wu


Abstract

We propose a novel approach for probabilistic generative modeling of 3D shapes. Unlike most existing models that learn to deterministically translate a latent vector to a shape, our model, Point-Voxel Diffusion (PVD), is a unified, probabilistic formulation for unconditional shape generation and conditional, multi-modal shape completion. PVD marries denoising diffusion models with the hybrid, point-voxel representation of 3D shapes. It can be viewed as a series of denoising steps, reversing the diffusion process from observed point cloud data to Gaussian noise, and is trained by optimizing a variational lower bound to the (conditional) likelihood function. Experiments demonstrate that PVD is capable of synthesizing high-fidelity shapes, completing partial point clouds, and generating multiple completion results from single-view depth scans of real objects.
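The denoising-diffusion formulation the abstract describes can be sketched minimally: a closed-form forward process adds Gaussian noise to a point cloud, and generation reverses it one step at a time. The sketch below uses a standard DDPM linear noise schedule as an assumption; it is not the paper's point-voxel network, and the "predicted noise" is stood in for by the true noise for illustration.

```python
import numpy as np

# Hedged sketch of the diffusion process PVD builds on, applied to a
# point cloud of shape (N, 3). The linear beta schedule and the use of
# the true noise in place of a learned predictor are assumptions for
# illustration, not details from the paper.

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # cumulative signal retention

def forward_diffuse(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0): closed-form forward noising."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def reverse_step(xt, t, eps_pred, rng):
    """One ancestral sampling step x_t -> x_{t-1}, given predicted noise.
    In PVD, eps_pred would come from the trained point-voxel network."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

rng = np.random.default_rng(0)
x0 = rng.standard_normal((2048, 3))     # toy "shape": 2048 points in 3D
xt, eps = forward_diffuse(x0, T - 1, rng)   # near-pure Gaussian noise
x_prev = reverse_step(xt, T - 1, eps, rng)  # one step with the true noise
```

Sampling a new shape would start from pure Gaussian noise and apply `reverse_step` for `t = T-1, ..., 0`, each time querying the denoising network; conditioning on a partial point cloud turns the same loop into multi-modal shape completion.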
