NeRF-VO: Real-Time Sparse Visual Odometry With Neural Radiance Fields

IEEE Robotics and Automation Letters 2024

1Technical University of Munich, 2University of Toronto, 3California Institute of Technology

TL;DR: We propose NeRF-VO, a monocular RGB SLAM system that utilizes sparse visual odometry for pose tracking and an implicit neural representation for mapping. This enables highly accurate camera tracking, high-quality 3D reconstruction, and high-fidelity novel view synthesis.

Abstract

We introduce a novel monocular visual odometry (VO) system, NeRF-VO, that integrates learning-based sparse visual odometry for low-latency camera tracking and a neural radiance scene representation for finely detailed dense reconstruction and novel view synthesis.

Our system initializes camera poses using sparse visual odometry and obtains view-dependent dense geometry priors from a monocular prediction network. We harmonize the scale of poses and dense geometry, treating them as supervisory cues to train a neural implicit scene representation. NeRF-VO demonstrates exceptional performance in both photometric and geometric fidelity of the scene representation by jointly optimizing a sliding window of keyframed poses and the underlying dense geometry, which is accomplished through training the radiance field with volume rendering.
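The scale harmonization between the sparse VO depth and the monocular dense priors can be sketched as a per-keyframe least-squares scale-and-shift fit. This is a minimal illustration, not the paper's exact formulation; the function name and the closed-form linear fit are our assumptions:

```python
import numpy as np

def align_depth_scale(pred_depth, sparse_depth, mask):
    """Fit s, t minimizing || s * pred + t - sparse ||^2 over valid pixels.

    pred_depth:   dense depth from the monocular prediction network
    sparse_depth: depth at sparse patches from visual odometry
    mask:         boolean array marking pixels with a valid sparse depth
    """
    x = pred_depth[mask]
    y = sparse_depth[mask]
    # Linear system [x, 1] @ [s, t]^T = y, solved in closed form.
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s * pred_depth + t, s, t
```

The aligned dense depth then lives in the same metric scale as the VO poses, so both can jointly supervise the radiance field.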

We surpass state-of-the-art methods in pose estimation accuracy, novel view synthesis fidelity, and dense reconstruction quality across a variety of synthetic and real-world datasets, while achieving a higher camera tracking frequency and consuming less GPU memory.


Method

NeRF-VO uses only a sequence of RGB images as input. The sparse visual tracking module selects keyframes from this input stream and calculates camera poses and depth values for a set of sparse patches. Additionally, the dense geometry enhancement module predicts dense depth maps and surface normals and aligns them with the sparse depth from the tracking module. The NeRF-based dense mapping module utilizes raw RGB images, inferred depth maps, surface normals, and camera poses to optimize a neural implicit representation and refine the camera poses. Our system is capable of performing high-quality 3D dense reconstruction and rendering images from novel views.
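The mapping objective described above can be sketched as a weighted sum of photometric, depth, and normal terms between rendered quantities and the dense priors. The term weights and function names below are illustrative assumptions, not values from the paper:

```python
import torch

def mapping_loss(rendered, priors, w_rgb=1.0, w_depth=0.1, w_normal=0.05):
    """Combined supervision for the radiance field (illustrative sketch).

    rendered / priors: dicts with 'rgb' (N, 3), 'depth' (N,), and
    unit-length 'normal' (N, 3) tensors per sampled ray.
    """
    # Photometric term: squared error on rendered colors.
    l_rgb = torch.mean((rendered['rgb'] - priors['rgb']) ** 2)
    # Geometric term: L1 error against the scale-aligned dense depth prior.
    l_depth = torch.mean(torch.abs(rendered['depth'] - priors['depth']))
    # Normal term: 1 - cosine similarity with the predicted surface normals.
    l_normal = torch.mean(1.0 - torch.sum(rendered['normal'] * priors['normal'], dim=-1))
    return w_rgb * l_rgb + w_depth * l_depth + w_normal * l_normal
```

In the actual system, this loss is minimized over a sliding window of keyframes, with gradients flowing to both the implicit scene representation and the keyframe poses.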

BibTeX

@article{naumann2024nerfvo,
  author={Naumann, Jens and Xu, Binbin and Leutenegger, Stefan and Zuo, Xingxing},
  journal={IEEE Robotics and Automation Letters}, 
  title={NeRF-VO: Real-Time Sparse Visual Odometry With Neural Radiance Fields}, 
  year={2024},
}