SOTAVerified

Motion Representations for Articulated Animation

2021-04-22 · CVPR 2021 · Code Available

Aliaksandr Siarohin, Oliver J. Woodford, Jian Ren, Menglei Chai, Sergey Tulyakov


Abstract

We propose novel motion representations for animating articulated objects consisting of distinct parts. In a completely unsupervised manner, our method identifies object parts, tracks them in a driving video, and infers their motions by considering their principal axes. In contrast to previous keypoint-based works, our method extracts meaningful and consistent regions describing location, shape, and pose. The regions correspond to semantically relevant, distinct object parts that are more easily detected in the frames of the driving video. To force the decoupling of foreground from background, we model non-object-related global motion with an additional affine transformation. To facilitate animation and prevent leakage of the shape of the driving object, we disentangle the shape and pose of objects in the region space. Our model can animate a variety of objects, surpassing previous methods by a large margin on existing benchmarks. We also present a challenging new benchmark with high-resolution videos and show that the improvement is particularly pronounced for articulated objects, reaching a 96.6% user preference over the state of the art.
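The "principal axes" of a region, as described in the abstract, can be pictured as a PCA over each predicted soft region map: the region's centre is the heatmap-weighted mean of pixel coordinates, and its axes are the eigenvectors of the spatial covariance. The sketch below is an illustrative reconstruction of that idea, not the authors' code; the function name `region_params` and the heatmap layout are our assumptions.

```python
import numpy as np

def region_params(heatmap):
    """Estimate the centre and principal axes of a soft region heatmap.

    heatmap: (H, W) non-negative array summing to 1 (a soft assignment map
    for one object part). Returns the weighted mean coordinate (x, y), the
    eigenvalues, and the eigenvectors (principal axes) of the region's
    spatial covariance -- i.e. a PCA of the region's support.
    """
    h, w = heatmap.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([xs, ys], axis=-1).astype(float)        # (H, W, 2)
    mean = (heatmap[..., None] * coords).sum(axis=(0, 1))     # region centre
    diff = coords - mean
    cov = np.einsum("hw,hwi,hwj->ij", heatmap, diff, diff)    # 2x2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)                    # principal axes
    return mean, eigvals, eigvecs
```

Matching such per-part means and axes between a source image and a driving frame yields a per-region affine motion, which is the kind of representation the abstract contrasts with plain keypoints.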

Benchmark Results

| Dataset          | Model           | Metric | Claimed | Verified | Status     |
|------------------|-----------------|--------|---------|----------|------------|
| MGif             | FOMM            | L1     | 0.02    |          | Unverified |
| MGif             | Siarohin et al. | L1     | 0.02    |          | Unverified |
| Tai-Chi-HD (256) | FOMM            | AED    | 0.17    |          | Unverified |
| Tai-Chi-HD (256) | Siarohin et al. | AED    | 0.15    |          | Unverified |
| Tai-Chi-HD (512) | FOMM            | AED    | 0.20    |          | Unverified |
| Tai-Chi-HD (512) | Siarohin et al. | AED    | 0.17    |          | Unverified |
| TED-talks        | Siarohin et al. | AED    | 0.11    |          | Unverified |
| TED-talks        | FOMM            | AED    | 0.16    |          | Unverified |
| VoxCeleb         | FOMM            | AED    | 0.13    |          | Unverified |
| VoxCeleb         | Siarohin et al. | AED    | 0.13    |          | Unverified |
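For context on the metric columns: in this family of video-reconstruction benchmarks, L1 typically denotes the mean absolute pixel difference between reconstructed and ground-truth frames (lower is better), while AED is the average Euclidean distance between identity embeddings of reconstructed and real frames from a pretrained network. A minimal sketch of the L1 metric under that assumption (the function name is ours, and frames are assumed to be floats in [0, 1]):

```python
import numpy as np

def l1_reconstruction_error(generated, ground_truth):
    """Mean absolute pixel difference between a reconstructed video and the
    original, averaged over frames, pixels, and channels.

    Both inputs are assumed to be float arrays of shape (T, H, W, C) with
    values in [0, 1]; lower is better.
    """
    return float(np.mean(np.abs(generated - ground_truth)))
```

AED is not sketched here because it depends on an external pretrained identity-embedding network rather than on the pixels alone.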

Reproductions