Novel UAV-based 3D reconstruction using dense LiDAR point cloud and imagery: A geometry-aware 3D Gaussian splatting approach


Bibliographic Details
Main Authors: Kai Qin, Jing Li, Sisi Zlatanova, Haitao Wu, Yin Gao, Yuchen Li, Sizhe Shen, Xiangjun Qu, Zhiyuan Yang, Zhenxin Zhang, Banghui Yang, Shaoyi Wang
Format: Article
Language: English
Published: Elsevier 2025-06-01
Series:International Journal of Applied Earth Observations and Geoinformation
Online Access:http://www.sciencedirect.com/science/article/pii/S1569843225002377
Description
Summary: The emergence of 3D Gaussian Splatting (3DGS) has recently marked a transformative shift in 3D representation, efficient rendering, and novel view synthesis. Despite these advances, geometric precision, photometric consistency, and novel view synthesis quality remain significant challenges in large-scale Unmanned Aerial Vehicle (UAV)-based 3D reconstruction. To tackle these challenges, we propose a geometry-aware 3D Gaussian splatting approach incorporating novel methods specifically designed for large-scale UAV-based reconstruction. First, recognizing that existing LiDAR-supervised 3DGS methods are primarily used to optimize Gaussian properties derived from Structure from Motion (SfM), we introduce a precise 3DGS initialization method that leverages highly accurate dense LiDAR point clouds precisely registered with imagery. By extracting depth, normal, and curvature information from the dense UAV LiDAR point cloud, we enhance geometric accuracy through geometric supervision in complex large-scale outdoor scenes. Second, addressing the limitations of current photometric supervision in 3DGS, which struggles with illumination variations due to the constraints of the spherical harmonics (SH) color representation, we propose a spatial-temporal SH method. This method refines photometric consistency by dynamically adapting to varying lighting conditions, thereby improving the overall quality of 3D reconstruction in diverse environmental settings. Through these methods, our approach advances the capabilities of 3DGS in large-scale UAV-based 3D reconstruction, offering improved geometric accuracy, photometric consistency, and novel view synthesis quality. Experimental results demonstrate that our method significantly outperforms traditional SLAM-based reconstruction and multi-view reconstruction approaches.
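The normal and curvature cues extracted from the dense LiDAR point cloud can be illustrated with a standard local-PCA estimator. This is a minimal sketch, not the paper's implementation: for each point, the covariance of its k nearest neighbors is eigen-decomposed; the smallest-variance eigenvector approximates the surface normal, and the surface-variation ratio serves as a curvature proxy. The function name and the brute-force neighbor search are assumptions for illustration.

```python
import numpy as np

def normals_and_curvature(points, k=8):
    """Estimate per-point normals and a curvature proxy from an (n, 3) point cloud.

    Illustrative sketch: fit a local plane to each point's k nearest
    neighbors via PCA. The eigenvector of the smallest eigenvalue is the
    normal; surface variation lambda_min / (lambda_0 + lambda_1 + lambda_2)
    acts as curvature (0 on flat regions, larger on edges and corners).
    """
    n = len(points)
    normals = np.empty((n, 3))
    curvature = np.empty(n)
    for i, p in enumerate(points):
        # Brute-force k-NN for clarity; a KD-tree would be used at LiDAR scale.
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        cov = np.cov(nbrs.T)
        evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
        normals[i] = evecs[:, 0]             # smallest-variance direction
        curvature[i] = evals[0] / evals.sum()
    return normals, curvature
```

On a perfectly planar patch the curvature proxy is (numerically) zero and every normal aligns with the plane's axis, which is the behavior geometric supervision relies on.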
In large-scale 3D reconstruction tasks that use both UAV LiDAR point clouds and imagery, our method achieves a 31.25% improvement in geometric accuracy while consistently surpassing existing methods in novel view synthesis quality metrics. Furthermore, our approach attains a PSNR exceeding 30 dB and optimizes photometric consistency, enhancing rendering quality and visual realism.
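The PSNR figure cited above is a standard rendering-quality metric. As a point of reference (not the paper's evaluation code), it can be computed from the mean squared error between a rendered view and its ground-truth image:

```python
import numpy as np

def psnr(rendered, reference, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images scaled to [0, max_val].

    PSNR = 10 * log10(max_val^2 / MSE); higher is better, and values
    above ~30 dB generally indicate visually faithful renderings.
    """
    mse = np.mean((np.asarray(rendered, dtype=float) - np.asarray(reference, dtype=float)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform per-pixel error of 0.1 on a [0, 1] image gives an MSE of 0.01 and hence a PSNR of exactly 20 dB.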
ISSN: 1569-8432