VirtualConductor: Music-driven Conducting Video Generation System
2021-07-28
Delong Chen, Fan Liu, Zewen Li, Feng Xu
Code
- github.com/ChenDelong1999/VirtualConductor (PyTorch, ★ 97)
- github.com/viiika/diffusion-conductor (PyTorch, ★ 26)
Abstract
In this demo, we present VirtualConductor, a system that generates a conducting video from any given piece of music and a single image of the user. First, we collect and construct a large-scale conductor motion dataset. Then, we propose the Audio Motion Correspondence Network (AMCNet) and an adversarial-perceptual learning scheme to learn the cross-modal relationship and generate diverse, plausible, music-synchronized motion. Finally, we combine 3D animation rendering with a pose transfer model to synthesize the conducting video from the user's single image. Through this system, any user can become a virtual conductor.
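The abstract describes a three-stage pipeline: audio feature extraction, AMCNet-based motion generation, and rendering plus pose transfer. The sketch below illustrates that data flow only; every function name, array shape, and dimension is an illustrative assumption, not the authors' actual API.

```python
import numpy as np

# Hypothetical sketch of the VirtualConductor pipeline. All names and
# shapes here are assumptions for illustration, not the released code.

def extract_audio_features(audio: np.ndarray, n_frames: int) -> np.ndarray:
    """Stand-in for audio feature extraction (e.g. a spectrogram)."""
    return np.random.rand(n_frames, 128)          # (frames, feature_dim)

def amcnet_generate_motion(audio_feats: np.ndarray) -> np.ndarray:
    """Stand-in for AMCNet: maps audio features to conducting motion,
    represented here as per-frame 3D joint positions."""
    n_frames = audio_feats.shape[0]
    return np.random.rand(n_frames, 17, 3)        # (frames, joints, xyz)

def render_and_transfer(motion: np.ndarray, user_image: np.ndarray) -> np.ndarray:
    """Stand-in for 3D animation rendering + pose transfer: one output
    video frame per motion frame, in the user's appearance."""
    n_frames = motion.shape[0]
    h, w, c = user_image.shape
    return np.zeros((n_frames, h, w, c), dtype=user_image.dtype)

def virtual_conductor(audio: np.ndarray, user_image: np.ndarray,
                      n_frames: int = 90) -> np.ndarray:
    feats = extract_audio_features(audio, n_frames)
    motion = amcnet_generate_motion(feats)
    return render_and_transfer(motion, user_image)

video = virtual_conductor(np.zeros(16000), np.zeros((256, 256, 3), dtype=np.uint8))
print(video.shape)  # (90, 256, 256, 3)
```

The point of the sketch is the modular decomposition: the audio-to-motion stage (AMCNet) is independent of the rendering stage, so either can be swapped out without retraining the other.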