
FreeGave: 3D Physics Learning from Dynamic Videos by Gaussian Velocity

2025-06-09 · CVPR 2025 · Code Available

Jinxi Li, Ziyang Song, Siyuan Zhou, Bo Yang


Abstract

In this paper, we aim to model 3D scene geometry, appearance, and the underlying physics purely from multi-view videos. Existing works, which apply various governing PDEs as PINN losses or incorporate physics simulation into neural networks, often fail to learn complex physical motions at boundaries or require object priors such as masks or types. We propose FreeGave to learn the physics of complex dynamic 3D scenes without needing any object priors. The key to our approach is a physics code followed by a carefully designed divergence-free module for estimating a per-Gaussian velocity field, without relying on inefficient PINN losses. Extensive experiments on three public datasets and a newly collected, challenging real-world dataset demonstrate the superior performance of our method for future frame extrapolation and motion segmentation. Most notably, our investigation into the learned physics codes reveals that they truly capture meaningful 3D physical motion patterns, despite the absence of any human labels during training.
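The abstract's "divergence-free module" can be illustrated with a standard construction: the curl of any smooth vector potential is exactly divergence-free, so parameterizing velocity as the curl of a learned field guarantees the constraint by design. The sketch below is a minimal illustration of that general idea, not the paper's actual architecture; the tiny potential network and all names here are hypothetical.

```python
import jax
import jax.numpy as jnp

def potential(x, params):
    # Hypothetical stand-in for a learned vector potential psi: R^3 -> R^3.
    W1, W2 = params
    return W2 @ jnp.tanh(W1 @ x)

def velocity(x, params):
    # v = curl(psi); the curl of any smooth field has zero divergence.
    J = jax.jacfwd(lambda p: potential(p, params))(x)  # J[i, j] = d psi_i / d x_j
    return jnp.array([
        J[2, 1] - J[1, 2],
        J[0, 2] - J[2, 0],
        J[1, 0] - J[0, 1],
    ])

def divergence(x, params):
    # div(v) = trace of the velocity Jacobian.
    Jv = jax.jacfwd(lambda p: velocity(p, params))(x)
    return jnp.trace(Jv)

params = (
    jax.random.normal(jax.random.PRNGKey(0), (16, 3)),
    jax.random.normal(jax.random.PRNGKey(1), (3, 16)),
)
x = jnp.array([0.3, -0.7, 1.2])
div = divergence(x, params)
print(float(jnp.abs(div)))  # near zero, up to float precision
```

Because the constraint is built into the parameterization, no PDE penalty term is needed to enforce incompressibility during training, which is consistent with the abstract's claim of avoiding PINN-style losses.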
