SOTAVerified

Gesture Generation

Generation of gestures as a sequence of 3D poses.

Papers

Showing 1–50 of 107 papers

Title | Status | Hype
EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling | Code | 3
BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis | Code | 3
GestureLSM: Latent Shortcut based Co-Speech Gesture Generation with Spatial-Temporal Modeling | Code | 2
The GENEA Challenge 2023: A large scale evaluation of gesture generation models in monadic and dyadic settings | Code | 2
Generating Holistic 3D Human Motion from Speech | Code | 2
Rhythmic Gesticulator: Rhythm-Aware Co-Speech Gesture Synthesis with Hierarchical Neural Embeddings | Code | 2
robosuite: A Modular Simulation Framework and Benchmark for Robot Learning | Code | 2
MambaTalk: Efficient Holistic Gesture Synthesis with Selective State Space Models | Code | 2
Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation | Code | 2
AMUSE: Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion | Code | 2
MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls | Code | 2
GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents | Code | 2
ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech | Code | 2
Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models | Code | 2
Retrieving Semantics from the Deep: an RAG Solution for Gesture Synthesis | Code | 2
Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity | Code | 1
Gesticulator: A framework for semantically-aware speech-driven gesture generation | Code | 1
ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis | Code | 1
A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents | Code | 1
UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons | Code | 1
Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation | Code | 1
Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach | Code | 1
Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows | Code | 1
DeepNAG: Deep Non-Adversarial Gesture Generation | Code | 1
The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation | Code | 1
SEEG: Semantic Energized Co-Speech Gesture Generation | Code | 1
Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning | Code | 1
Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates | Code | 1
C2G2: Controllable Co-speech Gesture Generation with Latent Diffusion Model | Code | 1
No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures | Code | 1
Moving fast and slow: Analysis of representations and post-processing in speech-driven automatic gesture generation | Code | 1
Probabilistic Human-like Gesture Synthesis from Speech using GRU-based WGAN | Code | 1
Learning Individual Styles of Conversational Gesture | Code | 1
Intentional Gesture: Deliver Your Intentions with Gestures for Speech | Code | 1
AQ-GT: a Temporally Aligned and Quantized GRU-Transformer for Co-Speech Gesture Synthesis | Code | 1
Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation | Code | 1
Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion | Code | 1
EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation | Code | 1
LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation | Code | 1
QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation | Code | 1
The ReprGesture entry to the GENEA Challenge 2022 | Code | 1
EasyGenNet: An Efficient Framework for Audio-Driven Gesture Video Generation Based on Diffusion Model | — | 0
DiM-Gestor: Co-Speech Gesture Generation with Adaptive Layer Normalization Mamba-2 | — | 0
Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation | — | 0
DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation | — | 0
Bridge to Non-Barrier Communication: Gloss-Prompted Fine-grained Cued Speech Gesture Generation with Diffusion Model | — | 0
DIDiffGes: Decoupled Semi-Implicit Diffusion Models for Real-time Gesture Generation from Speech | — | 0
Demonstration of the EmoteWizard of Oz Interface for Empathic Robotic Tutors | — | 0
Audio is all in one: speech-driven gesture synthetics using WavLM pre-trained model | — | 0
MDT-A2G: Exploring Masked Diffusion Transformers for Co-Speech Gesture Generation | — | 0
Page 1 of 3

No leaderboard results yet.