Gaussian-UDSR: Real-Time Unbounded Dynamic Scene Reconstruction with 3D Gaussian Splatting


Bibliographic Details
Main Authors: Yang Sun, Yue Zhou, Bin Tian, Haiyang Wang, Yongchao Zhao, Songdi Wu
Format: Article
Language: English
Published: MDPI AG 2025-06-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/15/11/6262
Description
Summary: Unbounded dynamic scene reconstruction is crucial for applications such as autonomous driving, robotics, and virtual reality. However, existing methods struggle to reconstruct dynamic scenes in unbounded outdoor environments due to challenges such as lighting variation, object motion, and sensor limitations, leading to inaccurate geometry and low rendering fidelity. In this paper, we propose Gaussian-UDSR, a novel 3D Gaussian-based representation that efficiently reconstructs and renders high-quality, unbounded dynamic scenes in real time. Our approach fuses LiDAR point clouds with Structure-from-Motion (SfM) point clouds obtained from an RGB camera, significantly improving depth estimation and geometric accuracy. To address dynamic appearance variations, we introduce a Gaussian color feature prediction network, which adaptively captures global and local feature information, enabling robust rendering under changing lighting conditions. Additionally, a pose-tracking mechanism ensures precise motion estimation for dynamic objects, enhancing realism and consistency. We evaluate Gaussian-UDSR on the Waymo and KITTI datasets, demonstrating state-of-the-art rendering quality with an 8.8% improvement in PSNR, a 75% reduction in LPIPS, and a fourfold speed improvement over existing methods. Our approach enables efficient, high-fidelity 3D reconstruction and fast real-time rendering of large-scale dynamic environments, while significantly reducing model storage overhead.
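The LiDAR–SfM point cloud fusion described in the summary can be illustrated with a minimal sketch: concatenate the two clouds and voxel-downsample the result to obtain candidate 3D Gaussian centers. This is an illustrative reconstruction, not the paper's implementation; the function name `fuse_point_clouds` and the voxel size are assumptions introduced here.

```python
import numpy as np

def fuse_point_clouds(lidar_pts, sfm_pts, voxel_size=0.2):
    """Illustrative fusion: stack LiDAR and SfM points, then keep one
    mean point per occupied voxel as a candidate Gaussian center.
    (Hypothetical helper; not the method from the paper.)"""
    pts = np.vstack([lidar_pts, sfm_pts])               # (N, 3) combined cloud
    keys = np.floor(pts / voxel_size).astype(np.int64)  # voxel index per point
    # Map each point to its voxel, then average points within each voxel.
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    fused = np.zeros((inv.max() + 1, 3))
    np.add.at(fused, inv, pts)                          # per-voxel sums
    counts = np.bincount(inv)                           # points per voxel
    return fused / counts[:, None]                      # per-voxel means

# Toy example: 100 "LiDAR" points and 50 "SfM" points in a 10 m cube.
rng = np.random.default_rng(0)
centers = fuse_point_clouds(rng.uniform(0, 10, (100, 3)),
                            rng.uniform(0, 10, (50, 3)),
                            voxel_size=1.0)
print(centers.shape)  # one (x, y, z) center per occupied 1 m voxel
```

In practice the fused cloud would also need LiDAR-to-camera extrinsics and SfM scale alignment before concatenation; the sketch assumes both clouds already share one metric frame.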
ISSN: 2076-3417