MoSca: Dynamic Gaussian Fusion from Casual Videos via 4D Motion Scaffolds

1University of Pennsylvania, 2Stanford University, 3Archimedes Athena RC

Abstract

We introduce 4D Motion Scaffolds (MoSca), a modern 4D reconstruction system designed to reconstruct and synthesize novel views of dynamic scenes from monocular videos captured casually in the wild. To address this challenging and ill-posed inverse problem, we leverage prior knowledge from vision foundation models and lift the video data to a novel Motion Scaffold (MoSca) representation, which compactly and smoothly encodes the underlying motions and deformations. The scene geometry and appearance are then disentangled from the deformation field: Gaussians anchored onto the MoSca are globally fused and optimized via Gaussian Splatting. Additionally, the camera focal length and poses can be solved jointly through bundle adjustment, without the need for any external pose estimation tools. Experiments demonstrate state-of-the-art performance on dynamic rendering benchmarks and show the method's effectiveness on real-world videos.
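The core idea of anchoring scene content to a sparse motion scaffold can be illustrated with a toy sketch. This is not the paper's implementation; it only assumes a hypothetical setup where each scaffold node carries a per-frame translation (the actual method uses richer transforms), and Gaussian centers are deformed by blending the motions of their nearest nodes with inverse-distance skinning weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy scaffold: M nodes, T frames. Each node carries a
# per-frame translation (a real system would use full rigid transforms).
M, T = 4, 3
node_canonical = rng.uniform(-1, 1, size=(M, 3))      # node positions in the canonical frame
node_motion = rng.normal(scale=0.1, size=(T, M, 3))   # per-frame node translations

def deform(points, frame, k=2):
    """Deform canonical points to `frame` by blending the motions of the
    k nearest scaffold nodes with inverse-distance skinning weights."""
    out = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(node_canonical - p, axis=1)  # distance to each node
        nn = np.argsort(d)[:k]                          # k nearest nodes
        w = 1.0 / (d[nn] + 1e-8)
        w /= w.sum()                                    # normalized skinning weights
        out[i] = p + w @ node_motion[frame, nn]         # blended translation
    return out

# Canonical Gaussian centers anchored onto the scaffold, then deformed.
gaussian_means = rng.uniform(-1, 1, size=(5, 3))
moved = deform(gaussian_means, frame=1)
print(moved.shape)  # (5, 3)
```

Because the scaffold is sparse, motion is stored only at the nodes, while every anchored Gaussian inherits a smooth deformation from its neighbors.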

Video results on the project page span several sources: SORA clips, vertical short videos, DAVIS sequences, Internet robot footage, and movie clips.

BibTeX

@article{lei2024mosca,
  title={{MoSca}: Dynamic Gaussian Fusion from Casual Videos via {4D} Motion Scaffolds},
  author={Lei, Jiahui and Weng, Yijia and Harley, Adam and Guibas, Leonidas and Daniilidis, Kostas},
  journal={arXiv preprint arXiv:2405.17421},
  year={2024}
}