SOTAVerified

Gesture Generation

Generation of gestures as a sequence of 3D poses.
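In this task a gesture is represented as a time-ordered sequence of skeletal poses. A minimal sketch of that data layout (the array shape, joint count, and frame rate below are illustrative assumptions, not taken from any dataset on this page):

```python
import numpy as np

# Hypothetical gesture clip: T frames, J joints, a 3D position per joint.
T, J = 120, 25  # e.g. 4 seconds at 30 fps with a 25-joint skeleton (illustrative)
gesture = np.zeros((T, J, 3), dtype=np.float32)

# A generator model maps conditioning signals (speech audio, text, speaker
# identity, ...) to such a sequence; here we only fill in a dummy root-joint
# position to show the layout.
gesture[0, 0] = [0.0, 1.6, 0.0]  # root joint at ~1.6 m height in frame 0

print(gesture.shape)  # (120, 25, 3)
```

Many of the papers listed below differ mainly in which conditioning signals they consume and which generative model family (GAN, VAE, diffusion) produces this pose sequence.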

Papers

Showing 51–100 of 107 papers

| Title | Status | Hype |
|---|---|---|
| Large language models in textual analysis for gesture selection | | 0 |
| LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation | Code | 1 |
| Speech-Gesture GAN: Gesture Generation for Robots and Embodied Agents | | 0 |
| UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons | Code | 1 |
| Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation | | 0 |
| SynthoGestures: A Novel Framework for Synthetic Dynamic Hand Gesture Generation for Driving Scenarios | Code | 0 |
| C2G2: Controllable Co-speech Gesture Generation with Latent Diffusion Model | Code | 1 |
| The GENEA Challenge 2023: A large scale evaluation of gesture generation models in monadic and dyadic settings | Code | 2 |
| Audio is all in one: speech-driven gesture synthetics using WavLM pre-trained model | | 0 |
| EMoG: Synthesizing Emotive Co-speech 3D Gesture with Diffusion Model | | 0 |
| EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation | Code | 1 |
| MPE4G: Multimodal Pretrained Encoder for Co-Speech Gesture Generation | | 0 |
| QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation | Code | 1 |
| AQ-GT: a Temporally Aligned and Quantized GRU-Transformer for Co-Speech Gesture Synthesis | Code | 1 |
| GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents | Code | 2 |
| GesGPT: Speech Gesture Synthesis With Text Parsing from ChatGPT | | 0 |
| Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation | Code | 2 |
| Evaluating gesture generation in a large-scale open challenge: The GENEA Challenge 2022 | | 0 |
| Audio2Gestures: Generating Diverse Gestures from Audio | | 0 |
| A Comprehensive Review of Data-Driven Co-Speech Gesture Generation | | 0 |
| Continual Learning for Personalized Co-speech Gesture Generation | | 0 |
| Generating Holistic 3D Human Motion from Speech | Code | 2 |
| Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models | Code | 2 |
| Rhythmic Gesticulator: Rhythm-Aware Co-Speech Gesture Synthesis with Hierarchical Neural Embeddings | Code | 2 |
| ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech | Code | 2 |
| Ecsnet: Spatio-temporal feature learning for event camera | Code | 0 |
| The ReprGesture entry to the GENEA Challenge 2022 | Code | 1 |
| The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation | Code | 1 |
| Zero-Shot Style Transfer for Gesture Animation driven by Text and Speech using Adversarial Disentanglement of Multimodal Style Encoding | | 0 |
| Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation | Code | 1 |
| BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis | Code | 3 |
| Low-Resource Adaptation for Personalized Co-Speech Gesture Generation | | 0 |
| SEEG: Semantic Energized Co-Speech Gesture Generation | Code | 1 |
| Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation | Code | 1 |
| Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates | Code | 1 |
| Audio2Gestures: Generating Diverse Gestures from Speech Audio with Conditional Variational Autoencoders | | 0 |
| Multimodal analysis of the predictability of hand-gesture properties | | 0 |
| Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning | Code | 1 |
| Probabilistic Human-like Gesture Synthesis from Speech using GRU-based WGAN | Code | 1 |
| Speech2Properties2Gestures: Gesture-Property Prediction as a Tool for Generating Representational Gestures from Speech | | 0 |
| A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents | Code | 1 |
| Learning Speech-driven 3D Conversational Gestures from Video | | 0 |
| Generating coherent spontaneous speech and gesture from text | | 0 |
| DeepNAG: Deep Non-Adversarial Gesture Generation | Code | 1 |
| Quantitative analysis of robot gesticulation behavior | | 0 |
| No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures | Code | 1 |
| robosuite: A Modular Simulation Framework and Benchmark for Robot Learning | Code | 2 |
| Interpreting and Generating Gestures with Embodied Human Computer Interactions | | 0 |
| Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity | Code | 1 |
| Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach | Code | 1 |
Page 2 of 3

No leaderboard results yet.