Everybody Dance Now
Caroline Chan, Shiry Ginosar, Tinghui Zhou, Alexei A. Efros
Code Available
- github.com/carolineec/EverybodyDanceNow (official, PyTorch)
- github.com/Lotayou/everybody_dance_now_pytorch (PyTorch)
- github.com/rajatsahay/Pose2Pose
- github.com/martin220485/everybody_dance_now_pytorch (PyTorch)
- github.com/CNC-IISER-BHOPAL/Any-Body-Can-Dance (PyTorch)
- github.com/Novemser/deep-imitation (PyTorch)
- github.com/aman-arya/Any-Body-Can-Dance (PyTorch)
- github.com/j-void/ISL_v2v (PyTorch)
- github.com/justinjohn0306/EverybodyDanceNow-Colab (PyTorch)
- github.com/dakenan1/Everybody-Dance-Now (PyTorch)
Abstract
This paper presents a simple method for "do as I do" motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We approach this problem as video-to-video translation using pose as an intermediate representation. To transfer the motion, we extract poses from the source subject and apply the learned pose-to-appearance mapping to generate the target subject. We predict two consecutive frames for temporally coherent video results and introduce a separate pipeline for realistic face synthesis. Although our method is quite simple, it produces surprisingly compelling results (see video). This motivates us to also provide a forensics tool for reliable synthetic content detection, which is able to distinguish videos synthesized by our system from real data. In addition, we release a first-of-its-kind open-source dataset of videos that can be legally used for training and motion transfer.
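The sketch below illustrates the inference pipeline described in the abstract: pose images extracted from the source video are mapped to target-subject frames by a learned generator, with each frame conditioned on the previous output as a simplification of the paper's two-consecutive-frame scheme for temporal coherence, and an optional face-refinement stage. The class and function names (`Generator`, `transfer_motion`, `face_refiner`) are placeholders for illustration, not the authors' released code; the actual system uses a pose detector such as OpenPose and a pix2pixHD-style generator trained on the target subject.

```python
# Minimal sketch of a "do as I do" transfer loop, assuming PyTorch.
# All names here are hypothetical stand-ins, not the paper's implementation.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Toy stand-in for the learned pose-to-appearance mapping."""

    def __init__(self):
        super().__init__()
        # Input: pose stick-figure image for frame t concatenated with the
        # previously generated frame (6 channels); output: an RGB frame.
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, pose_t, prev_frame):
        return self.net(torch.cat([pose_t, prev_frame], dim=1))


def transfer_motion(source_poses, generator, face_refiner=None):
    """Generate target-subject frames from a sequence of source pose images.

    Conditioning each frame on the previous output approximates the paper's
    idea of predicting consecutive frames for temporally coherent video.
    """
    frames = []
    prev = torch.zeros(1, 3, 256, 256)  # blank frame before the sequence starts
    for pose_t in source_poses:
        frame = generator(pose_t, prev)
        if face_refiner is not None:
            frame = face_refiner(frame)  # separate pipeline for realistic faces
        frames.append(frame)
        prev = frame.detach()
    return frames


if __name__ == "__main__":
    G = Generator()
    # Stand-in pose images; in practice these come from a pose detector
    # (e.g., OpenPose) run on the source dancer and normalized to the target.
    poses = [torch.rand(1, 3, 256, 256) for _ in range(4)]
    out = transfer_motion(poses, G)
    print(len(out), out[0].shape)
```

In the full method, the generator and face-refinement network are trained adversarially on a few minutes of video of the target subject performing standard moves, so at test time only the source poses change.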