SOTAVerified

Motion Generation

Papers

Showing 301–350 of 446 papers

Title | Status | Hype
Shape Conditioned Human Motion Generation with Diffusion Model | - | 0
StableMoFusion: Towards Robust and Efficient Diffusion-based Motion Generation Framework | - | 0
MoDiPO: text-to-motion alignment via AI-feedback-driven Direct Preference Optimization | - | 0
Efficient Text-driven Motion Generation via Latent Consistency Training | Code | 0
WheelPose: Data Synthesis Techniques to Improve Pose Estimation Performance on Wheelchair Users | Code | 0
WANDR: Intention-guided Human Motion Generation | - | 0
GaussianTalker: Speaker-specific Talking Head Synthesis via 3D Gaussian Splatting | - | 0
Purposer: Putting Human Motion Generation in Context | - | 0
Text-controlled Motion Mamba: Text-Instructed Temporal Grounding of Human Motion | - | 0
LADDER: An Efficient Framework for Video Frame Interpolation | - | 0
Generating Human Interaction Motions in Scenes with Text Control | - | 0
GHOST: Grounded Human Motion Generation with Open Vocabulary Scene-and-Text Contexts | - | 0
Large Motion Model for Unified Multi-Modal Motion Generation | - | 0
Towards Variable and Coordinated Holistic Co-Speech Motion Generation | - | 0
InterDreamer: Zero-Shot Text to 3D Dynamic Human-Object Interaction | - | 0
Choreographing the Digital Canvas: A Machine Learning Approach to Artistic Performance | - | 0
Gaze-guided Hand-Object Interaction Synthesis: Dataset and Method | - | 0
Contact-aware Human Motion Generation from Textual Descriptions | - | 0
GPT-Connect: Interaction between Text-Driven Human Motion Generator and 3D Scenes in a Training-free Manner | - | 0
Guided Decoding for Robot On-line Motion Generation and Adaption | - | 0
CoMo: Controllable Motion Generation through Language Guided Pose Code Editing | - | 0
AnySkill: Learning Open-Vocabulary Physical Skill for Interactive Agents | - | 0
LM2D: Lyrics- and Music-Driven Dance Synthesis | - | 0
CustomListener: Text-guided Responsive Interaction for User-friendly Listening Head Generation | - | 0
RealDex: Towards Human-like Grasping for Robotic Dexterous Hand | - | 0
Bidirectional Autoregressive Diffusion Model for Dance Generation | - | 0
Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation | - | 0
DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation | - | 0
Freetalker: Controllable Speech and Text-Driven Gesture Generation Based on Diffusion Models for Enhanced Speaker Naturalness | - | 0
Progress and Prospects in 3D Generative AI: A Technical Overview including 3D human | - | 0
Move as You Say Interact as You Can: Language-guided Human Motion Generation with Scene Affordance | - | 0
InsActor: Instruction-driven Physics-based Characters | - | 0
4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency | - | 0
Plan, Posture and Go: Towards Open-World Text-to-Motion Generation | - | 0
MotionScript: Natural Language Descriptions for Expressive 3D Human Motions | - | 0
HuTuMotion: Human-Tuned Navigation of Latent Motion Diffusion Models with Minimal Feedback | - | 0
Realistic Human Motion Generation with Cross-Diffusion Models | - | 0
Movement Primitive Diffusion: Learning Gentle Robotic Manipulation of Deformable Objects | - | 0
Motion Flow Matching for Human Motion Synthesis and Editing | - | 0
OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers | - | 0
Reacting like Humans: Incorporating Intrinsic Human Behaviors into NAO through Sound-Based Reactions to Fearful and Shocking Events for Enhanced Sociability | - | 0
HOI-Diff: Text-Driven Synthesis of 3D Human-Object Interactions using Diffusion Models | - | 0
HandDiffuse: Generative Controllers for Two-Hand Interactions via Diffusion Models | - | 0
Digital Life Project: Autonomous 3D Characters with Social Intelligence | - | 0
DiffusionPhase: Motion Diffusion in Frequency Domain | - | 0
FG-MDM: Towards Zero-Shot Human Motion Generation via ChatGPT-Refined Descriptions | - | 0
OmniMotionGPT: Animal Motion Generation with Limited Data | - | 0
SpeechAct: Towards Generating Whole-body Motion from Speech | - | 0
A Unified Framework for Multimodal, Multi-Part Human Motion Synthesis | - | 0
Page 7 of 9

No leaderboard results yet.