SOTAVerified

MotionPCM: Real-Time Motion Synthesis with Phased Consistency Model

2025-01-31

Lei Jiang, Ye Wei, Hao Ni


Abstract

Diffusion models have become a popular choice for human motion synthesis due to their powerful generative capabilities. However, their high computational cost and the many sampling steps they require pose challenges for real-time applications. Fortunately, the Consistency Model (CM) offers a way to reduce the number of sampling steps from hundreds to a few, typically fewer than four, significantly accelerating diffusion-based synthesis. However, applying it to text-conditioned human motion synthesis in latent space remains challenging. In this paper, we introduce MotionPCM, a phased consistency model-based approach designed to improve the quality and efficiency of real-time motion synthesis in latent space.
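To make the few-step sampling idea in the abstract concrete, below is a minimal sketch of multi-step consistency sampling: a learned consistency function maps a noisy latent at any timestep directly to a clean estimate, and sampling alternates denoising with re-noising to the next, smaller timestep. The `consistency_fn` here is a toy placeholder, not the trained network from the paper, and the timestep schedule is illustrative.

```python
import numpy as np

def consistency_fn(x_t, t):
    # Toy stand-in for a trained consistency model: in practice this is a
    # neural network that maps a noisy latent x_t at time t directly to an
    # estimate of the clean latent x_0. (Hypothetical placeholder.)
    return x_t / (1.0 + t)

def few_step_sample(shape, timesteps, rng):
    """Multi-step consistency sampling: denoise to a clean estimate,
    re-noise to the next (smaller) timestep, and repeat per phase."""
    t0 = timesteps[0]
    x = rng.standard_normal(shape) * t0   # start from pure noise at t0
    x0 = consistency_fn(x, t0)            # one-shot clean-latent estimate
    for t in timesteps[1:]:
        # re-noise the estimate to time t, then denoise again
        x = x0 + t * rng.standard_normal(shape)
        x0 = consistency_fn(x, t)
    return x0

rng = np.random.default_rng(0)
# Four timesteps -> four network evaluations, matching the "fewer than
# four" few-step regime described above (schedule values are assumptions).
sample = few_step_sample((4, 8), timesteps=[80.0, 20.0, 5.0, 1.0], rng=rng)
print(sample.shape)
```

Each loop iteration costs one network evaluation, so a four-entry schedule replaces the hundreds of denoising steps of a standard diffusion sampler.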

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| HumanML3D | MotionPCM | FID | 0.03 | — | Unverified |
| KIT Motion-Language | MotionPCM | FID | 0.29 | — | Unverified |
