MorphGS: Morphology-Adaptive Articulated 3D Motion Transfer from Videos
Taeyeon Kim, Youngju Na, Jumin Lee, Sebin Lee, Minhyuk Sung, Sung-Eui Yoon
Abstract
Transferring articulated motion from monocular videos to rigged 3D characters is challenging due to pose ambiguity in 2D observations and morphological differences between source and target. Existing approaches often follow a reconstruct-then-retarget paradigm, tying transfer quality to intermediate 3D reconstruction and limiting applicability to categories with parametric templates. We propose MorphGS, a framework that formulates motion retargeting as a target-driven analysis-by-synthesis problem, directly optimizing target morphology and pose through image-space supervision. A rig-coupled morphology parameterization factorizes character identity from time-varying joint rotations, while dense 2D-3D correspondences and synthesized views provide complementary structural and multi-view guidance. Experiments on synthetic benchmarks and in-the-wild videos show consistent improvements over baselines.
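The core idea of the abstract — directly optimizing target morphology (e.g., bone lengths) and per-frame joint rotations against 2D observations, rather than reconstructing the source in 3D first — can be illustrated with a toy analysis-by-synthesis loop. The sketch below is not the paper's method: it uses a hypothetical planar two-link chain, 2D keypoints as a stand-in for image-space supervision, and finite-difference gradient descent in place of differentiable rendering.

```python
import numpy as np

def fk(lengths, angles):
    """Forward kinematics of a planar chain with cumulative joint
    rotations; returns joint positions (n+1, 2) with the root at the origin."""
    pts = [np.zeros(2)]
    total = 0.0
    for bone_len, ang in zip(lengths, angles):
        total += ang
        pts.append(pts[-1] + bone_len * np.array([np.cos(total), np.sin(total)]))
    return np.stack(pts)

def loss(params, target):
    """Image-space-style loss: squared distance between posed joints
    and observed 2D keypoints (params = [lengths..., angles...])."""
    lengths, angles = params[:2], params[2:]
    return np.sum((fk(lengths, angles) - target) ** 2)

def optimize(target, params, steps=3000, lr=0.05, eps=1e-4):
    """Jointly fit morphology (lengths) and pose (angles) by
    finite-difference gradient descent on the 2D reprojection loss."""
    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(len(params)):
            d = np.zeros_like(params)
            d[i] = eps
            grad[i] = (loss(params + d, target) - loss(params - d, target)) / (2 * eps)
        params = params - lr * grad
    return params

# Hypothetical "observations": 2D joints of a source chain in some pose.
target = fk(np.array([1.0, 0.7]), np.array([0.6, -0.4]))
init = np.array([0.8, 0.8, 0.0, 0.0])  # wrong morphology, rest pose
fit = optimize(target, init)
print(loss(fit, target))  # residual shrinks toward zero as the fit converges
```

The split of `params` into `lengths` and `angles` mirrors the factorization the abstract describes: morphology is shared across frames, while joint rotations would vary per time step in a video setting.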