SOTAVerified

Motion Generation

Papers

Showing 51–100 of 446 papers

| Title | Status | Hype |
| --- | --- | --- |
| MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model | Code | 2 |
| MotionCLIP: Exposing Human Motion Generation to CLIP Space | Code | 2 |
| StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN | Code | 2 |
| Freeform Body Motion Generation from Speech | Code | 2 |
| SViMo: Synchronized Diffusion for Video and Motion Generation in Hand-object Interaction Scenarios | Code | 1 |
| EPFL-Smart-Kitchen-30: Densely annotated cooking dataset with 3D kinematics to challenge video and language models | Code | 1 |
| Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation | Code | 1 |
| MMGT: Motion Mask Guided Two-Stage Network for Co-Speech Gesture Video Generation | Code | 1 |
| AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models | Code | 1 |
| Light-T2M: A Lightweight and Fast Model for Text-to-motion Generation | Code | 1 |
| SoPo: Text-to-Motion Generation Using Semi-Online Preference Optimization | Code | 1 |
| Constrained Diffusion with Trust Sampling | Code | 1 |
| KMM: Key Frame Mask Mamba for Extended Motion Generation | Code | 1 |
| MotionBank: A Large-scale Video Motion Benchmark with Disentangled Rule-based Annotations | Code | 1 |
| MDMP: Multi-modal Diffusion for supervised Motion Predictions with uncertainty | Code | 1 |
| MoRAG -- Multi-Fusion Retrieval Augmented Generation for Human Motion | Code | 1 |
| T3M: Text Guided 3D Human Motion Synthesis from Speech | Code | 1 |
| InfiniMotion: Mamba Boosts Memory in Transformer for Arbitrary Long Motion Generation | Code | 1 |
| M^3GPT: An Advanced Multimodal, Multitask Framework for Motion Comprehension and Generation | Code | 1 |
| Motion Avatar: Generate Human and Animal Avatars with Arbitrary Motion | Code | 1 |
| Exploring Text-to-Motion Generation with Human Preference | Code | 1 |
| LaserHuman: Language-guided Scene-aware Human Motion Generation in Free Environment | Code | 1 |
| Motion Generation from Fine-grained Textual Descriptions | Code | 1 |
| Driving Animatronic Robot Facial Expression From Speech | Code | 1 |
| Understanding and Improving Training-free Loss-based Diffusion Guidance | Code | 1 |
| Dyadic Interaction Modeling for Social Behavior Generation | Code | 1 |
| MotionMix: Weakly-Supervised Diffusion for Controllable Motion Generation | Code | 1 |
| GUESS: GradUally Enriching SyntheSis for Text-Driven Human Motion Generation | Code | 1 |
| FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing | Code | 1 |
| BOTH2Hands: Inferring 3D Hands from Both Text Prompts and Body Dynamics | Code | 1 |
| MMM: Generative Masked Motion Model | Code | 1 |
| EMDM: Efficient Motion Diffusion Model for Fast and High-Quality Motion Generation | Code | 1 |
| InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint | Code | 1 |
| MCM: Multi-condition Motion Synthesis Framework for Multi-scenario | Code | 1 |
| AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism | Code | 1 |
| Dance with You: The Diversity Controllable Dancer Generation via Diffusion Models | Code | 1 |
| Synthesizing Long-Term Human Motions with Diffusion Models via Coherent Sampling | Code | 1 |
| RSMT: Real-time Stylized Motion Transition for Characters | Code | 1 |
| Taming Diffusion Models for Music-driven Conducting Motion Generation | Code | 1 |
| TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration | Code | 1 |
| MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels | Code | 1 |
| UDE: A Unified Driving Engine for Human Motion Generation | Code | 1 |
| Autoregressive GAN for Semantic Unconditional Head Motion Generation | Code | 1 |
| Being Comes from Not-being: Open-vocabulary Text-to-Motion Generation with Wordless Training | Code | 1 |
| Motion Policy Networks | Code | 1 |
| PoseGPT: Quantization-based 3D Human Motion Generation and Forecasting | Code | 1 |
| HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes | Code | 1 |
| FLAME: Free-form Language-based Motion Synthesis & Editing | Code | 1 |
| LATTE: LAnguage Trajectory TransformEr | Code | 1 |
| Diverse Dance Synthesis via Keyframes with Transformer Controllers | Code | 1 |
Page 2 of 9

No leaderboard results yet.