Free-T2M: Frequency Enhanced Text-to-Motion Diffusion Model With Consistency Loss
Wenshuo Chen, Haozhe Jia, Songning Lai, Keming Wu, Hongru Xiao, Lijie Hu, Yutao Yue
Code
- github.com/Hxxxz0/Free-T2m (official PyTorch implementation)
Abstract
Rapid progress in text-to-motion generation has been largely driven by diffusion models. However, existing methods focus solely on temporal modeling, thereby overlooking frequency-domain analysis. We identify two key phases in motion denoising: the **semantic planning stage** and the **fine-grained improving stage**. To address these phases effectively, we propose the **Fre**quency **e**nhanced **t**ext-**to**-**m**otion diffusion model (**Free-T2M**), which incorporates stage-specific consistency losses that enhance the robustness of static features and improve fine-grained accuracy. Extensive experiments demonstrate the effectiveness of our method. Specifically, on StableMoFusion, our method reduces the FID from **0.189** to **0.051**, establishing new SOTA performance among diffusion-based architectures. These findings highlight the importance of incorporating frequency-domain insights into text-to-motion generation for more precise and robust results.
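To make the two-stage idea concrete, below is a minimal PyTorch sketch of a stage-specific, frequency-domain consistency loss in the spirit of the abstract. The function names (`low_freq_consistency_loss`, `stagewise_loss`), the low-frequency cutoff `keep_ratio`, the timestep split `t_split`, and the weighting `w_freq` are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def low_freq_consistency_loss(pred, target, keep_ratio=0.25):
    """Match only the lowest temporal frequencies of the motion.

    pred, target: (B, T, D) denoised vs. ground-truth motion sequences.
    keep_ratio: fraction of lowest frequency bins to compare (an
    illustrative choice; the paper's cutoff may differ).
    """
    pred_f = torch.fft.rfft(pred, dim=1)    # (B, F, D) complex spectrum over time
    tgt_f = torch.fft.rfft(target, dim=1)
    n_low = max(1, int(pred_f.size(1) * keep_ratio))
    # Low-frequency bins carry the coarse structure of the motion,
    # i.e., the content decided during the "semantic planning" stage.
    return (pred_f[:, :n_low] - tgt_f[:, :n_low]).abs().mean()

def stagewise_loss(pred, target, t, t_split, w_freq=1.0):
    """Combine reconstruction and frequency consistency by stage.

    t: (B,) diffusion timesteps; samples with t >= t_split are treated
    as the semantic planning stage, the rest as fine-grained improving.
    Both the split and the plain MSE fallback are illustrative.
    """
    per_sample_mse = F.mse_loss(pred, target, reduction="none").mean(dim=(1, 2))
    planning = (t >= t_split).float()       # (B,) stage indicator
    freq = low_freq_consistency_loss(pred, target)
    # Apply the frequency term in proportion to how much of the batch
    # is in the planning stage; later steps rely on the MSE term.
    return per_sample_mse.mean() + w_freq * planning.mean() * freq
```

Under this reading, the frequency term stabilizes the static, low-frequency content early in denoising, while the plain reconstruction term handles fine-grained refinement at low-noise timesteps.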
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| HumanML3D | Free-T2M (StableMoFusion) | FID | 0.051 | — | Unverified |
| KIT Motion-Language | Free-T2M (StableMoFusion) | FID | 0.16 | — | Unverified |