Interactive 3D scene reconstruction from robotics datasets. Explore point clouds, training metrics, and novel view synthesis results.
Click and drag to orbit. Scroll to zoom. Right-click to pan. Both published GitHub Pages assets and locally exported files are supported. Real Gaussian Splat viewers: antimatter15/splat (WebGL) · Spark (WebGL / ESM) · shrekshao (WebGPU, compute-sort).
Click each step to learn more about the reconstruction pipeline.
Multi-view images are captured from different camera poses around a scene. For autonomous driving datasets, these come from vehicle-mounted cameras. The images should have sufficient overlap for feature matching and cover the scene from diverse viewpoints.
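As a rough illustration of the "sufficient overlap" requirement, the sketch below selects camera pairs whose viewing directions differ by less than a threshold angle — a cheap geometric proxy one might run before an actual feature matcher. The function names and the 30° threshold are illustrative assumptions, not part of any specific pipeline.

```python
import numpy as np

def pairwise_view_angles(camera_dirs):
    """Angle (degrees) between every pair of camera viewing directions.

    camera_dirs: (N, 3) array of forward vectors, one per camera.
    """
    d = np.asarray(camera_dirs, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    cos = np.clip(d @ d.T, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def good_matching_pairs(camera_dirs, max_angle_deg=30.0):
    """Select camera pairs likely to share enough overlap for feature
    matching (hypothetical heuristic: viewing directions within
    max_angle_deg of each other)."""
    ang = pairwise_view_angles(camera_dirs)
    n = ang.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if ang[i, j] < max_angle_deg]
```

For example, two nearly parallel forward-facing cameras would be paired, while a sideways-facing camera (90° away) would be excluded; a real pipeline would confirm overlap with feature matching (e.g. SIFT/ORB in COLMAP).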
Training progress for 3D Gaussian Splatting optimization (30K iterations per scene).
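For context, the original 3D Gaussian Splatting paper optimizes for 30K iterations with a photometric loss combining L1 and D-SSIM (weight λ = 0.2), and densifies Gaussians every 100 iterations between iterations 500 and 15,000. The sketch below shows those two pieces in simplified form; the SSIM value is assumed to be computed elsewhere, and the helper names are illustrative.

```python
import numpy as np

LAMBDA_DSSIM = 0.2  # D-SSIM weight from the original 3DGS paper

def gs_loss(rendered, target, ssim_value):
    """Combined photometric loss used in 3D Gaussian Splatting:
    L = (1 - lambda) * L1 + lambda * (1 - SSIM).
    ssim_value is assumed to come from a windowed SSIM computed elsewhere."""
    l1 = np.abs(np.asarray(rendered) - np.asarray(target)).mean()
    return (1 - LAMBDA_DSSIM) * l1 + LAMBDA_DSSIM * (1 - ssim_value)

def should_densify(iteration, start=500, stop=15_000, every=100):
    """Densification schedule sketch: clone/split Gaussians every `every`
    iterations between `start` and `stop` (paper defaults)."""
    return start <= iteration <= stop and iteration % every == 0
```

A perfect render (zero L1 error, SSIM of 1.0) gives zero loss; densification fires only on the scheduled iterations, which is why the metrics plotted above typically show Gaussian count growing in steps during the first half of training.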
Robotics and autonomous driving datasets used for 3D reconstruction.
Urban street scenes for autonomous driving 3D reconstruction. Multi-view images from vehicle-mounted cameras with rich annotations.
University campus environments for robot navigation. Calibrated multi-camera captures across diverse outdoor campus scenes.
Indoor room reconstruction for embodied navigation. Generalizable 3D Gaussians from pose-free in-the-wild images of interior spaces.
Sample input images for 3D Gaussian Splatting reconstruction.

CoVLA / Waymo — Urban driving scene