MotionPCM: Real-Time Motion Synthesis with Phased Consistency Model
Lei Jiang, Ye Wei, Hao Ni
Abstract
Diffusion models have become a popular choice for human motion synthesis due to their powerful generative capabilities. However, their high computational cost and the large number of sampling steps they require pose challenges for real-time applications. Fortunately, the Consistency Model (CM) greatly reduces the number of sampling steps, from hundreds to a few (typically fewer than four), significantly accelerating diffusion-based synthesis. Applying it to text-conditioned human motion synthesis in latent space, however, remains challenging. In this paper, we introduce MotionPCM, a Phased Consistency Model-based approach designed to improve the quality and efficiency of real-time motion synthesis in latent space.
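The few-step sampling the abstract describes can be illustrated with a minimal multi-step consistency sampler. This is a toy sketch, not the paper's method: the analytic shrinkage denoiser `denoise` stands in for a trained (phased) consistency network, and the noise schedule `sigmas` is an arbitrary assumption. It only shows the sampling structure in which a handful of model calls replace hundreds of diffusion steps.

```python
import numpy as np

def denoise(x_t, sigma):
    # Toy stand-in for a trained consistency model: maps a noisy latent
    # x_t at noise level sigma directly to a clean-latent estimate.
    # This closed form is the posterior mean for a standard-normal
    # latent prior under additive Gaussian noise; a real model would be
    # a neural network conditioned on the text prompt.
    return x_t / (1.0 + sigma ** 2)

def sample(f, shape, sigmas, rng):
    # Multi-step consistency sampling: start from pure noise, predict the
    # clean latent, then alternately re-noise at a smaller level and
    # predict again. Each iteration is one model call.
    x = rng.standard_normal(shape) * sigmas[0]
    x0 = f(x, sigmas[0])
    for sigma in sigmas[1:]:
        x = x0 + sigma * rng.standard_normal(shape)  # re-noise the estimate
        x0 = f(x, sigma)                             # one consistency call
    return x0

rng = np.random.default_rng(0)
# Hypothetical latent shape (batch, frames, latent_dim) and a 4-step schedule.
motion_latent = sample(denoise, (1, 16, 8), [10.0, 2.0, 0.5, 0.05], rng)
print(motion_latent.shape)  # (1, 16, 8)
```

With four entries in `sigmas`, the sampler makes exactly four denoiser calls, which is the regime ("typically fewer than four" steps) the abstract refers to.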
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| HumanML3D | MotionPCM | FID | 0.03 | — | Unverified |
| KIT Motion-Language | MotionPCM | FID | 0.29 | — | Unverified |