DreamGaussian4D: Generative 4D Gaussian Splatting
Abstract
Remarkable progress has been made in 4D content generation recently. However, existing methods suffer from long optimization times, a lack of motion controllability, and a low level of detail. In this paper, we introduce DreamGaussian4D, an efficient 4D generation framework that builds on the 4D Gaussian Splatting representation. Our key insight is that the explicit modeling of spatial transformations in Gaussian Splatting makes it more suitable for the 4D generation setting than implicit representations. DreamGaussian4D reduces the optimization time from several hours to just a few minutes, allows flexible control of the generated 3D motion, and produces animated meshes that can be efficiently rendered in 3D engines.
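To make the "explicit modeling of spatial transformations" concrete, below is a minimal conceptual sketch (not the authors' implementation) of how a static set of 3D Gaussians can be deformed with a time-conditioned offset network to represent 4D content. The module and variable names (`DeformationField`, `static_xyz`) are hypothetical illustrations only.

```python
# Conceptual sketch, assuming PyTorch. It illustrates time-conditioned,
# explicit deformation of Gaussian centers; a real 4D Gaussian Splatting
# pipeline would also deform rotation/scale and feed the result to a rasterizer.
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Predicts a per-Gaussian xyz offset for a timestamp t in [0, 1]."""
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 3),  # xyz offset
        )

    def forward(self, xyz: torch.Tensor, t: float) -> torch.Tensor:
        t_col = torch.full((xyz.shape[0], 1), t, device=xyz.device)
        return self.mlp(torch.cat([xyz, t_col], dim=-1))

# Static Gaussians (centers only, for brevity); a full splat also stores
# rotation, scale, opacity, and color per Gaussian.
static_xyz = torch.randn(1024, 3)
deform = DeformationField()

# Deformed centers at an arbitrary timestamp; these would be rasterized
# to render one frame of the animated object.
xyz_at_t = static_xyz + deform(static_xyz, t=0.5)
print(xyz_at_t.shape)  # torch.Size([1024, 3])
```

Because the deformation is an explicit offset on Gaussian parameters, the motion can be edited or driven directly, which is the controllability property the abstract highlights.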
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- 4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency (2023)
- Consistent4D: Consistent 360° Dynamic Object Generation from Monocular Video (2023)
- CAD: Photorealistic 3D Generation via Adversarial Distillation (2023)
- GAvatar: Animatable 3D Gaussian Avatars with Implicit Mesh Learning (2023)
- AvatarStudio: High-fidelity and Animatable 3D Avatar Creation from Text (2023)
The demo is super cool!🔥