SOTAVerified

Human-VDM: Learning Single-Image 3D Human Gaussian Splatting from Video Diffusion Models

2024-09-04 · Code Available

Zhibin Liu, Haoye Dong, Aviral Chharia, Hefeng Wu


Abstract

Generating lifelike 3D humans from a single RGB image remains a challenging task in computer vision, as it requires accurate modeling of geometry, high-quality texture, and plausible unseen parts. Existing methods typically use multi-view diffusion models for 3D generation, but they often suffer from view-inconsistency issues, which hinder high-quality 3D human generation. To address this, we propose Human-VDM, a novel method for generating a 3D human from a single RGB image using video diffusion models. Human-VDM provides temporally consistent views for 3D human generation via Gaussian Splatting. It consists of three modules: a view-consistent human video diffusion module, a video augmentation module, and a Gaussian Splatting module. First, the single image is fed into the human video diffusion module to generate a coherent human video. Next, the video augmentation module applies super-resolution and video frame interpolation to enhance the texture detail and geometric smoothness of the generated video. Finally, the 3D human Gaussian Splatting module learns a lifelike human under the guidance of these high-resolution, view-consistent frames. Experiments demonstrate that Human-VDM generates high-quality 3D humans from a single image, outperforming state-of-the-art methods in both qualitative and quantitative comparisons. Project page: https://human-vdm.github.io/Human-VDM/
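The three-stage pipeline described in the abstract can be sketched as a minimal data-flow skeleton. Everything below is a hypothetical illustration: the function names and the stand-in operations (frame repetition for diffusion, nearest-neighbor upsampling for super-resolution, linear blending for frame interpolation, a dummy parameter dict for the splatting fit) are placeholders, not the authors' models or API.

```python
# Hypothetical sketch of the Human-VDM three-stage pipeline.
# All names and operations are illustrative placeholders.
import numpy as np

def human_video_diffusion(image, num_frames=8):
    """Stage 1 (placeholder): generate a view-consistent orbit video
    from a single RGB image. Here we simply repeat the input frame."""
    return np.stack([image] * num_frames)

def augment_video(frames, scale=2, interp_factor=2):
    """Stage 2 (placeholder): super-resolve each frame, then interpolate
    intermediate frames to smooth the apparent camera motion."""
    # Nearest-neighbor upsampling stands in for a super-resolution model.
    upscaled = frames.repeat(scale, axis=1).repeat(scale, axis=2)
    # Linear blending stands in for a video-interpolation model.
    out = []
    for a, b in zip(upscaled[:-1], upscaled[1:]):
        out.append(a)
        for k in range(1, interp_factor):
            t = k / interp_factor
            out.append(((1 - t) * a + t * b).astype(a.dtype))
    out.append(upscaled[-1])
    return np.stack(out)

def fit_gaussian_splatting(frames):
    """Stage 3 (placeholder): optimize a 3D Gaussian Splatting model
    against the augmented frames; returns a dummy summary dict."""
    return {"num_views": len(frames), "resolution": frames.shape[1:3]}

image = np.zeros((64, 64, 3), dtype=np.uint8)           # single input image
video = human_video_diffusion(image, num_frames=8)       # stage 1: 8 frames
video = augment_video(video, scale=2, interp_factor=2)   # stage 2: 15 frames at 128x128
model = fit_gaussian_splatting(video)                    # stage 3: fit representation
```

The point of the sketch is the interface between stages: each module consumes and produces a frame stack, so the diffusion, augmentation, and splatting components can be developed and swapped independently.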

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| THuman2.0 | Human-VDM | CLIP Similarity | 0.92 | — | Unverified |

Reproductions