This paper presents the first photo-realistic LiDAR-Inertial-Camera Gaussian Splatting SLAM system that simultaneously addresses visual quality, geometric accuracy, and real-time performance. The proposed method performs robust and accurate pose estimation within a continuous-time trajectory optimization framework, while incrementally reconstructing a 3D Gaussian map from camera and LiDAR data, all in real time. The resulting map enables high-quality, real-time novel view rendering of both RGB images and depth maps. To effectively address under-reconstruction in regions not covered by the LiDAR, we employ a lightweight zero-shot depth model that synergistically combines RGB appearance cues with sparse LiDAR measurements to generate dense depth maps. This depth completion enables reliable Gaussian initialization in LiDAR-blind areas, significantly improving the system's applicability to sparse LiDAR sensors. To enhance geometric accuracy, we use sparse but precise LiDAR depths to supervise Gaussian map optimization and accelerate it with carefully designed CUDA strategies. Furthermore, we explore how the incrementally reconstructed Gaussian map can improve the robustness of odometry. By tightly incorporating photometric constraints from the Gaussian map into the continuous-time factor graph optimization, we demonstrate improved pose estimation under LiDAR degradation scenarios. We also showcase downstream applications by extending our system, including video frame interpolation and fast 3D mesh extraction. To support rigorous evaluation, we construct a dedicated LiDAR-Inertial-Camera dataset featuring ground-truth poses, depth maps, and extrapolated trajectories for assessing out-of-sequence novel view synthesis. Extensive experiments on both public and self-collected datasets demonstrate the superiority and versatility of our system across LiDAR sensors with varying sampling densities. Both the dataset and code will be made publicly available.
The system consists of two main modules: a continuous-time, tightly-coupled LiDAR-Inertial-Camera odometry front-end and an incremental photo-realistic 3DGS mapping back-end. First, we design the tightly-coupled LiDAR-Inertial-Camera odometry as the front-end, which supports two optional camera factors tightly fused within a continuous-time factor graph, including photometric constraints from the Gaussian map. Second, we utilize an efficient yet generalizable depth model to fully initialize Gaussians and prepare mapping data for the back-end. Third, we perform photo-realistic mapping with depth regularization and CUDA-based acceleration.
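To make the depth regularization in the third step concrete, the sketch below shows one plausible form of the mapping objective: a photometric L1 term on the rendered RGB image plus an L1 depth term supervised only at pixels with valid (sparse) LiDAR returns. The function name `mapping_loss` and the weight `lambda_depth` are illustrative assumptions, not the paper's actual implementation, which runs in CUDA with additional acceleration strategies.

```python
import numpy as np

def mapping_loss(rendered_rgb, gt_rgb, rendered_depth, lidar_depth,
                 lambda_depth=0.1):
    """Illustrative combined mapping loss (not the paper's exact objective):
    photometric L1 on RGB plus sparse-LiDAR-supervised L1 on depth."""
    photo = np.abs(rendered_rgb - gt_rgb).mean()
    valid = lidar_depth > 0.0  # mask of pixels covered by LiDAR returns
    if valid.any():
        depth = np.abs(rendered_depth[valid] - lidar_depth[valid]).mean()
    else:
        depth = 0.0  # no LiDAR coverage: photometric term only
    return photo + lambda_depth * depth
```

Because the depth term is masked, it adds geometric supervision exactly where the LiDAR is trustworthy while leaving LiDAR-blind regions to the photometric term and the completed-depth initialization.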
Qualitative results of RGB and depth rendering (out-of-sequence novel view) on our self-collected dataset. (a-d) The green path represents the trajectory for collecting training views, the red path shows the out-of-sequence trajectory for evaluation, and the yellow stars indicate the selected out-of-sequence novel views. The sky regions in rendered depth maps are masked in black.
Application 1 - Video Frame Interpolation: The images in the middle are frames interpolated at intermediate timestamps between the left and right images, benefiting from the spatiotemporal capabilities of the continuous-time trajectory and 3DGS.
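The key enabler of this application is that a continuous-time trajectory can be queried at any timestamp, after which the 3DGS map is rendered from the interpolated pose. As a minimal stand-in for the paper's spline-based trajectory, the sketch below interpolates between two keyframe poses with quaternion SLERP for rotation and linear interpolation for translation; the function names and the two-pose setup are illustrative assumptions.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    d = float(np.dot(q0, q1))
    if d < 0.0:            # flip to take the shorter arc on the quaternion sphere
        q1, d = -q1, -d
    if d > 0.9995:         # nearly parallel: linear interpolation is stable
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(d, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_pose(t0, q0, p0, t1, q1, p1, t):
    """Query a pose at time t in [t0, t1]: SLERP the rotation, lerp the
    translation. A real continuous-time spline is smoother, but the idea of
    sampling the trajectory at arbitrary timestamps is the same."""
    s = (t - t0) / (t1 - t0)
    q = slerp(np.asarray(q0, float), np.asarray(q1, float), s)
    p = (1 - s) * np.asarray(p0, float) + s * np.asarray(p1, float)
    return q, p
```

Rendering the Gaussian map from each such intermediate pose yields the in-between frames shown in the figure.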
Application 2 - Rapid 3D Mesh Extraction: Normal-colorized mesh generated from the Gaussian map reconstructed in real time.
@article{lang2025gaussian2,
title={Gaussian-LIC2: LiDAR-Inertial-Camera Gaussian Splatting SLAM},
author={Lang, Xiaolei and Lv, Jiajun and Tang, Kai and Li, Laijian and Huang, Jianxin and Liu, Lina and Liu, Yong and Zuo, Xingxing},
journal={arXiv},
year={2025}
}