FMGS: Foundation Model Embedded 3D Gaussian Splatting for Holistic 3D Scene Understanding

International Journal of Computer Vision (IJCV), 2024

Xingxing Zuo1, Pouya Samangouei1, Yunwen Zhou1, Yan Di1, Mingyang Li1
1Google AR

TL;DR: FMGS embeds foundation models into a 3D scene representation that seamlessly integrates 3D Gaussians and multi-resolution hash encodings (MHE). The trained scene representation supports open-vocabulary text queries; relevancy to queries on the LERF dataset is shown below.

Abstract

Precisely perceiving the geometric and semantic properties of real-world 3D objects is crucial for the continued evolution of augmented reality and robotic applications. To this end, we present Foundation Model Embedded Gaussian Splatting (FMGS), which incorporates vision-language embeddings of foundation models into 3D Gaussian Splatting (GS).

The key contribution of this work is an efficient method to reconstruct and represent 3D vision-language models. This is achieved by distilling feature maps generated by image-based foundation models into those rendered from our 3D model. To ensure high-quality rendering and fast training, we introduce a novel scene representation that integrates the strengths of both GS and multi-resolution hash encodings (MHE). Our training procedure also introduces a pixel alignment loss that pulls the rendered features of the same semantic entity close together, following pixel-level semantic boundaries. Our results demonstrate remarkable multi-view semantic consistency, facilitating diverse downstream tasks, beating state-of-the-art methods by 10.2 percent on open-vocabulary language-based object detection while being 851x faster at inference. This research explores the intersection of vision, language, and 3D scene representation, paving the way for enhanced scene understanding in uncontrolled real-world environments. We have prepared the code for release; it is currently undergoing internal review, and we will make it publicly available once we receive permission.
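To make the pixel alignment loss concrete, here is a minimal sketch, assuming one plausible formulation in which the dot-product similarity between a pixel's rendered CLIP feature and its neighbours is pushed towards the corresponding similarity of pixel-aligned DINO features; the function name, kernel size, and tensor layout are illustrative, and the paper's exact formulation may differ.

import torch
import torch.nn.functional as F

def pixel_alignment_loss(rendered_clip, dino_target, kernel=3):
    # rendered_clip: (H, W, C) CLIP feature map rendered from the 3D model.
    # dino_target:   (H, W, D) pixel-aligned DINO feature map from the 2D model.
    # For every pixel, the CLIP-feature similarity to its neighbours is pushed
    # towards the DINO-feature similarity to the same neighbours, so rendered
    # CLIP features follow pixel-level semantic boundaries.
    clip = F.normalize(rendered_clip, dim=-1).permute(2, 0, 1).unsqueeze(0)  # (1, C, H, W)
    dino = F.normalize(dino_target, dim=-1).permute(2, 0, 1).unsqueeze(0)    # (1, D, H, W)

    # Gather the kernel x kernel neighbourhood of every pixel.
    k2 = kernel * kernel
    clip_nb = F.unfold(clip, kernel, padding=kernel // 2).view(1, clip.shape[1], k2, -1)
    dino_nb = F.unfold(dino, kernel, padding=kernel // 2).view(1, dino.shape[1], k2, -1)

    # Dot-product similarity between each pixel and each of its neighbours.
    clip_sim = (clip.flatten(2).unsqueeze(2) * clip_nb).sum(1)  # (1, k*k, H*W)
    dino_sim = (dino.flatten(2).unsqueeze(2) * dino_nb).sum(1)  # (1, k*k, H*W)
    return (clip_sim - dino_sim).abs().mean()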

Relevancy to open-vocabulary queries enabled by our FMGS on the LERF dataset.

Overview

FMGS training pipeline. The FMGS scene representation renders CLIP and DINO feature maps, which are compared against the low-resolution and pixel-misaligned feature maps extracted from pretrained foundation models.
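The sketch below shows what one such training step could look like, assuming a hypothetical scene.render_features(camera) rasterizer that splats the MHE-decoded embeddings into 2D feature maps; the loss terms and weights are illustrative rather than the paper's exact recipe.

import torch.nn.functional as F

def distillation_step(scene, camera, clip_target, dino_target, optimizer, lambda_dino=1.0):
    # clip_target: (H, W, C) hybrid CLIP feature map, aggregated offline over
    #              multiple resolutions of the pretrained CLIP image encoder.
    # dino_target: (H, W, D) pixel-aligned DINO feature map.
    # Render feature maps from the hybrid Gaussian + MHE representation.
    # scene.render_features is a placeholder, not an API of any released library.
    rendered_clip, rendered_dino = scene.render_features(camera)

    # Distill the 2D foundation-model features into the 3D representation.
    loss = F.huber_loss(rendered_clip, clip_target) \
        + lambda_dino * F.huber_loss(rendered_dino, dino_target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()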

FMGS query pipeline at inference time. Given an open-vocabulary query, FMGS generates a relevancy map highlighting the parts of the rendered CLIP feature map that are relevant to the query embedding.
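Below is a minimal sketch of how such a relevancy map can be computed from the rendered CLIP feature map and a CLIP text embedding of the query. It assumes a LERF-style pairwise softmax against canonical negative phrases; the function name and the exact scoring used by FMGS are assumptions, not confirmed by this page.

import torch
import torch.nn.functional as F

# Canonical negative phrases commonly used for LERF-style relevancy scoring.
CANONICAL_PHRASES = ["object", "things", "stuff", "texture"]

def relevancy_map(rendered_clip, query_embed, negative_embeds):
    # rendered_clip:   (H, W, C) rendered CLIP feature map.
    # query_embed:     (C,) CLIP text embedding of the open-vocabulary query.
    # negative_embeds: (N, C) CLIP text embeddings of the canonical phrases.
    # Returns an (H, W) map in [0, 1]; higher means more relevant to the query.
    feat = F.normalize(rendered_clip, dim=-1)
    q = F.normalize(query_embed, dim=-1)
    negs = F.normalize(negative_embeds, dim=-1)

    pos = feat @ q                     # (H, W) cosine similarity to the query
    neg = feat @ negs.T                # (H, W, N) similarity to each negative

    # Pairwise softmax against every negative, keep the worst case (min over negatives).
    pos_exp = pos.unsqueeze(-1).exp()  # (H, W, 1)
    score = pos_exp / (pos_exp + neg.exp())
    return score.min(dim=-1).values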

Keypoints:

Novel semantic scene representation: We introduce a novel approach combining 3D Gaussians (parameterized by mean, covariance, opacity, and spherical harmonics) for geometry and appearance representation with MHE for efficient semantic embedding. This approach addresses memory constraints in room-scale scenes containing millions of 3D Gaussians (a simplified MHE lookup is sketched after this list).

Multi-view consistent language embeddings: Our training process utilizes Gaussian-splatting based rendering from multiple views, ensuring consistency across 3D space in static scenarios. Language embeddings remain invariant to viewpoints, enforcing local proximity consistency within Gaussian volumes.

Addressing pixel misalignment: We address the pixel misalignment of CLIP features by extracting and aggregating them at multiple resolutions into a hybrid CLIP feature, which is used to supervise training. Regularization with pixel-aligned DINO features and a novel dot-product similarity loss enhances spatial precision and object differentiation.
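As referenced in the first keypoint, the sketch below illustrates how a shared multi-resolution hash encoding can map 3D positions (e.g. Gaussian means) to semantic CLIP/DINO embeddings. It is a simplified stand-in: nearest-vertex lookup instead of trilinear interpolation, illustrative class and parameter names, and default sizes that are assumptions rather than the paper's settings.

import torch
import torch.nn as nn

class HashEncodedSemantics(nn.Module):
    # Simplified multi-resolution hash encoding (MHE) followed by small MLP heads
    # that decode CLIP and DINO embeddings for query 3D positions.

    PRIMES = (1, 2654435761, 805459861)  # spatial-hashing primes from Instant-NGP

    def __init__(self, levels=16, table_size=2**19, feat_dim=2,
                 base_res=16, growth=1.5, clip_dim=512, dino_dim=384):
        super().__init__()
        self.resolutions = [int(base_res * growth ** l) for l in range(levels)]
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(table_size, feat_dim)) for _ in range(levels)])
        hidden = 64
        self.clip_head = nn.Sequential(nn.Linear(levels * feat_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, clip_dim))
        self.dino_head = nn.Sequential(nn.Linear(levels * feat_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, dino_dim))

    def _hash(self, ijk, table_size):
        # Spatial hash of integer grid coordinates, ijk: (N, 3) int64.
        h = ijk[:, 0] * self.PRIMES[0]
        h = h ^ (ijk[:, 1] * self.PRIMES[1])
        h = h ^ (ijk[:, 2] * self.PRIMES[2])
        return h % table_size

    def forward(self, xyz):
        # xyz: (N, 3) positions normalized to [0, 1], e.g. Gaussian means.
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            ijk = (xyz * res).round().long()     # nearest grid vertex per level
            idx = self._hash(ijk, table.shape[0])
            feats.append(table[idx])             # (N, feat_dim)
        feats = torch.cat(feats, dim=-1)         # (N, levels * feat_dim)
        return self.clip_head(feats), self.dino_head(feats)

# Usage: embed the means of all Gaussians in one batch (hypothetical data).
# means = torch.rand(100_000, 3)
# clip_emb, dino_emb = HashEncodedSemantics()(means)

Because the hash tables and MLP heads are shared by all Gaussians, the per-Gaussian memory cost of semantics stays constant regardless of how many Gaussians the scene contains, which is the memory argument made in the first keypoint.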

Visualization

From our trained open-semantic scene representation, we can render RGB images as well as high-resolution CLIP and DINO feature maps at novel views. Below we show the rendered RGB images and the CLIP and DINO feature maps visualized by PCA.

Columns, left to right: RGB, CLIP, DINO.

Experimental Results

Object detection results. The left column displays the ground-truth bounding boxes (blue), our detected highest-relevancy pixel (green), and the one detected by LERF (red). The middle column shows our relevancy score for the text query given at the far left of each row. The right column shows LERF's relevancy score for the same query. Our relevancy score is more focused on the target objects linked to the query.

Semantic segmentation results. From top to bottom, the rows show RGB images, ground-truth (GT) segmentation masks, our refined segmentation results, our segmentation results, and the segmentation results obtained with the LERF scene representation. Note that neither our method nor LERF was designed for the segmentation task; our primary aim is to evaluate the pixel accuracy of the relevancy map computed from the rendered CLIP features. We can further post-process and refine our 2D segmentation results with SAM, shown as 'Ours-Refined'.

PCA visualization of rendered features. Qualitative comparison between the DINO/CLIP feature maps rendered with a scene representation without MHE (attaching learnable feature vectors to individual Gaussians) and with our scene representation that integrates Gaussians and MHE.

BibTeX

@article{zuo2024fmgs,
  title={FMGS: Foundation Model Embedded 3D Gaussian Splatting for Holistic 3D Scene Understanding},
  author={Zuo, Xingxing and Samangouei, Pouya and Zhou, Yunwen and Di, Yan and Li, Mingyang},
  journal={arXiv preprint arXiv:2401.01970},
  year={2024}
}