SOTAVerified

Free-T2M: Frequency Enhanced Text-to-Motion Diffusion Model With Consistency Loss

2025-01-30 · Code Available

Wenshuo Chen, Haozhe Jia, Songning Lai, Keming Wu, Hongru Xiao, Lijie Hu, Yutao Yue


Abstract

Rapid progress in text-to-motion generation has been largely driven by diffusion models. However, existing methods focus solely on temporal modeling, thereby overlooking frequency-domain analysis. We identify two key phases in motion denoising: the **semantic planning stage** and the **fine-grained improving stage**. To address these phases effectively, we propose **Fre**quency **e**nhanced **t**ext-**to**-**m**otion diffusion model (**Free-T2M**), incorporating stage-specific consistency losses that enhance the robustness of static features and improve fine-grained accuracy. Extensive experiments demonstrate the effectiveness of our method. Specifically, on StableMoFusion, our method reduces the FID from **0.189** to **0.051**, establishing a new SOTA performance within the diffusion architecture. These findings highlight the importance of incorporating frequency-domain insights into text-to-motion generation for more precise and robust results.
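The abstract describes stage-specific consistency losses grounded in frequency-domain analysis: coarse semantic structure lives in low temporal frequencies, while fine-grained detail lives in higher ones. A minimal sketch of one such low-frequency consistency term is below; the function name, the `keep_ratio` parameter, and the plain L2 formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def lowfreq_consistency_loss(pred, target, keep_ratio=0.25):
    """Hypothetical sketch of a frequency-domain consistency loss.

    pred, target: (T, D) motion sequences (T frames, D pose features).
    Keeps only the lowest `keep_ratio` fraction of temporal frequencies,
    matching the intuition that the semantic planning stage should pin
    down coarse, low-frequency structure first.
    """
    # Real FFT along the temporal axis (frames).
    pf = np.fft.rfft(pred, axis=0)
    tf = np.fft.rfft(target, axis=0)
    # Number of low-frequency bins to retain (at least one).
    k = max(1, int(pf.shape[0] * keep_ratio))
    # Mean squared distance over the retained low-frequency coefficients.
    return float(np.mean(np.abs(pf[:k] - tf[:k]) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal((60, 22))
print(lowfreq_consistency_loss(x, x))  # identical inputs give zero loss
```

A fine-grained counterpart would do the opposite, penalizing disagreement in the discarded high-frequency bins during later denoising steps.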

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| HumanML3D | Free-T2M (StableMoFusion) | FID | 0.05 | — | Unverified |
| KIT Motion-Language | Free-T2M (StableMoFusion) | FID | 0.16 | — | Unverified |
