This repository provides the official implementations of 3D Gaussian Ray Tracing (3DGRT) and 3D Gaussian Unscented Transform (3DGUT). Unlike traditional methods that rely on splatting, 3DGRT performs ray tracing of volumetric Gaussian particles. This enables support for distorted cameras with complex, time-dependent effects such as rolling shutters, and efficiently simulates the secondary rays required for rendering phenomena like reflection, refraction, and shadows. However, 3DGRT requires dedicated ray-tracing hardware and remains slower than 3DGS.
To mitigate this limitation, we also propose 3DGUT, which enables support for distorted cameras with complex, time-dependent effects within a rasterization framework, maintaining the efficiency of rasterization methods. By aligning the rendering formulations of 3DGRT and 3DGUT, we introduce a hybrid approach called 3DGRUT. This technique allows for rendering primary rays via rasterization and secondary rays via ray tracing, combining the strengths of both methods for improved performance and flexibility.
For projects that require a fast, modular, and production-ready Gaussian Splatting framework, we recommend using gsplat, which also provides support for 3DGUT.
3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes Nicolas Moenne-Loccoz*, Ashkan Mirzaei*, Or Perel, Riccardo De Lutio, Janick Martinez Esturo, Gavriel State, Sanja Fidler, Nicholas Sharp^, Zan Gojcic^ (*,^ indicates equal contribution) SIGGRAPH Asia 2024 (Journal Track) Project page / Paper / Video / BibTeX
3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting Qi Wu*, Janick Martinez Esturo*, Ashkan Mirzaei, Nicolas Moenne-Loccoz, Zan Gojcic (* indicates equal contribution) CVPR 2025 (Oral) Project page / Paper / Video / BibTeX
- [2026/03] NCore v4: Support for training from NCore v4 datasets (NCore, commands).
- [2026/01] Physically-Plausible ISP support.
- [2025/08] Support for the 3DGRT and 3DGS/3DGRT pipelines is now available with the Vulkan API as part of the Vulkan Gaussian Splatting Project. 3DGUT will also be available soon.
- [2025/07] Support for datasets with multiple sensors (only for COLMAP-style datasets).
- [2025/07] Support for Windows has been added.
- [2025/06] Playground supports PBR meshes and environment maps.
- [2025/04] Support for image masks.
- [2025/04] SparseAdam support.
- [2025/04] MCMC densification strategy support.
- [2025/04] Stable release v1.0.0 tagged.
- [2025/03] Initial code release!
- [2025/02] 3DGUT was accepted to CVPR 2025!
- [2024/08] 3DGRT was accepted to SIGGRAPH Asia 2024!
- News
- Contents
- 1. Dependencies and Installation
- 2. Train 3DGRT or 3DGUT scenes
- 3. Rendering from Checkpoints
- 4. Evaluations
- 5. Interactive Playground GUI
- 6. Contributing
- 7. Citations
- 8. Acknowledgements
- Supported CUDA versions: 11.8, 12.4, 12.6, 12.8 (default), 13.0 (experimental)
- For good performance with 3DGRT, we recommend using an NVIDIA GPU with Ray Tracing (RT) cores.
- Both Linux and Windows are supported via UV install scripts.
(Kindly contributed by @MasahiroOgawa)
UV provides faster installation and better dependency resolution.
git clone --recursive https://github.com/nv-tlabs/3dgrut.git
cd 3dgrut

Linux
The install scripts automatically find or install a GCC version compatible with your chosen CUDA toolkit.
Prerequisites:
- uv installed:
curl -LsSf https://astral.sh/uv/install.sh | sh
- A CUDA toolkit (choose one of the sub-options below).
- OpenGL headers for playground:
sudo apt-get install libgl1-mesa-dev
Sub-option A1: System CUDA (use an existing nvcc in PATH or CUDA_HOME):
./install_env_uv.sh # venv name defaults to "3dgrut"
source .venv/bin/activate

Sub-option A2: conda-managed CUDA (let conda install the CUDA toolkit):
# Step 1: create a conda environment with the CUDA toolkit
CUDA_VERSION=12 ./scripts/create_conda.sh 3dgrut
conda activate 3dgrut
# Step 2: install Python dependencies
./install_env_uv.sh # or: conda run -n 3dgrut ./install_env_uv.sh

Sub-option A3: Local venv CUDA (download CUDA into .venv/, no system-wide install needed):
# Supported CUDA_VERSION values: 11.8 (or 11), 12.4, 12.6, 12.8 (or 12), 13.0 (or 13)
CUDA_VERSION=12 ./scripts/create_venv_cuda.sh # ~4 GB download on first run
./install_env_uv.sh
source .venv/bin/activate

[!NOTE] Requires wget:
sudo apt-get install wget

The CUDA toolkit runfile (~4 GB) is cached at /tmp/cuda_<version>_linux.run and reused on subsequent runs. The downloaded CUDA toolkit is installed locally to .venv/cuda-{version}/. You can force a local install even with system CUDA available by setting FORCE_LOCAL_CUDA=1 in the environment variables.
Windows
Prerequisites:
- uv installed:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
- CUDA Toolkit installed from NVIDIA CUDA Downloads
- Visual Studio Build Tools (2019 or later) with the Desktop development with C++ workload.
The script auto-detects cl.exe, cmake, and ninja from the VS installation. If both a CUDA-compatible VS (2019-2022) and a newer one are installed, the script prefers the compatible version. For VS 2025+ (not yet officially supported by CUDA), the script automatically adds --allow-unsupported-compiler to nvcc.
From a PowerShell terminal in the project root:
.\install_env_uv.ps1 # auto-detects CUDA, venv name defaults to "3dgrut"

To override CUDA_HOME:
$env:CUDA_HOME = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8"
.\install_env_uv.ps1

After installation, activate the virtual environment (required for every new terminal session):
.venv\Scripts\Activate.ps1

This also sets the build environment variables (TORCH_CUDA_ARCH_LIST, CUDA_HOME, VS Build Tools paths, etc.) that were persisted during installation.
Legacy Conda Installation via install_env.sh (CUDA 11.8.0 / 12.8.1 only)
[!NOTE]
install_env.sh is a legacy script that only supports CUDA 11.8.0 and 12.8.1 and requires manual GCC management. For new setups, prefer Sub-option A1 above, which supports more CUDA versions and handles GCC compatibility automatically.
git clone --recursive https://github.com/nv-tlabs/3dgrut.git
cd 3dgrut
chmod +x install_env.sh
./install_env.sh 3dgrut
conda activate 3dgrut

If your system GCC is newer than 11, install gcc-11 first and pass the WITH_GCC11 flag:
sudo apt-get install gcc-11 g++-11
./install_env.sh 3dgrut WITH_GCC11

We support CUDA 12.8 (Blackwell / RTX 50 series), kindly contributed by @johnnynunez:
Using the legacy script:
CUDA_VERSION=12.8.1 ./install_env.sh 3dgrut_cuda12 WITH_GCC11

Or using the UV script:
CUDA_VERSION=12 ./scripts/create_venv_cuda.sh 3dgrut_cuda12
./install_env_uv.sh
source .venv/bin/activate

To build the Docker image with CUDA 12.8:
docker build --build-arg CUDA_VERSION=12.8.1 -t 3dgrut:cuda128 .

Build the Docker image:
git clone --recursive https://github.com/nv-tlabs/3dgrut.git
cd 3dgrut
docker build . -t 3dgrut

Run it:
xhost +local:root
docker run --rm -it --gpus=all --net=host --ipc=host -v $PWD:/workspace --runtime=nvidia -e DISPLAY 3dgrut

Note
Remember to set the DISPLAY environment variable if you are running on a remote server from the command line.
We provide different configurations for training using 3DGRT and 3DGUT models on common benchmark datasets. For example, you can download the NeRF Synthetic dataset, the MipNeRF360 dataset, or ScanNet++, and then run one of the following commands:
# Train Lego with 3DGRT & 3DGUT
python train.py --config-name apps/nerf_synthetic_3dgrt.yaml path=data/nerf_synthetic/lego out_dir=runs experiment_name=lego_3dgrt
python train.py --config-name apps/nerf_synthetic_3dgut.yaml path=data/nerf_synthetic/lego out_dir=runs experiment_name=lego_3dgut
# Train Bonsai
python train.py --config-name apps/colmap_3dgrt.yaml path=data/mipnerf360/bonsai out_dir=runs experiment_name=bonsai_3dgrt dataset.downsample_factor=2
python train.py --config-name apps/colmap_3dgut.yaml path=data/mipnerf360/bonsai out_dir=runs experiment_name=bonsai_3dgut dataset.downsample_factor=2
# Train Scannet++
python train.py --config-name apps/scannetpp_3dgrt.yaml path=data/scannetpp/0a5c013435/dslr out_dir=runs experiment_name=0a5c013435_3dgrt
python train.py --config-name apps/scannetpp_3dgut.yaml path=data/scannetpp/0a5c013435/dslr out_dir=runs experiment_name=0a5c013435_3dgut

Set path to your NCore v4 sequence JSON. Data layout and tooling are described in the open-source NCore repository. Training defaults are in configs/dataset/ncore.yaml.
python train.py --config-name apps/ncore_3dgut.yaml path=<path>/<sequence-meta>.json out_dir=runs experiment_name=ncore_3dgut
python train.py --config-name apps/ncore_3dgut_mcmc.yaml path=<path>/<sequence-meta>.json out_dir=runs experiment_name=ncore_3dgut_mcmc
python train.py --config-name apps/ncore_3dgrt.yaml path=<path>/<sequence-meta>.json out_dir=runs experiment_name=ncore_3dgrt
python train.py --config-name apps/ncore_3dgrt_mcmc.yaml path=<path>/<sequence-meta>.json out_dir=runs experiment_name=ncore_3dgrt_mcmc
# Example overrides: dataset.downsample=0.5 num_workers=8

We also support the MCMC densification strategy and the selective Adam optimizer for 3DGRT and 3DGUT.
To enable MCMC, use:
python train.py --config-name apps/colmap_3dgrt_mcmc.yaml path=data/mipnerf360/bonsai out_dir=runs experiment_name=bonsai_3dgrt dataset.downsample_factor=2
python train.py --config-name apps/colmap_3dgut_mcmc.yaml path=data/mipnerf360/bonsai out_dir=runs experiment_name=bonsai_3dgut dataset.downsample_factor=2

To enable selective Adam, use:
python train.py --config-name apps/colmap_3dgrt.yaml path=data/mipnerf360/bonsai out_dir=runs experiment_name=bonsai_3dgrt dataset.downsample_factor=2 optimizer.type=selective_adam
python train.py --config-name apps/colmap_3dgut.yaml path=data/mipnerf360/bonsai out_dir=runs experiment_name=bonsai_3dgut dataset.downsample_factor=2 optimizer.type=selective_adam

If you use MCMC or selective Adam in your research, please cite 3dgs-mcmc, taming-3dgs, and the gsplat library from which the code was adapted (links to the code are provided in the source files).
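All of these commands use Hydra-style dotted overrides: each key=value argument walks the nested config and replaces a leaf. A rough pure-Python sketch of the mechanism (illustration only; train.py itself uses Hydra, which also handles type conversion and validation):

```python
# Sketch of Hydra-style dotted overrides: "a.b.c=v" walks the nested
# config dict and sets the leaf value. Values stay strings here for brevity.
def apply_override(cfg: dict, dotted: str) -> dict:
    key, _, value = dotted.partition("=")
    *parents, leaf = key.split(".")
    node = cfg
    for name in parents:
        node = node.setdefault(name, {})
    node[leaf] = value
    return cfg

cfg = {"dataset": {"downsample_factor": "1"}, "optimizer": {"type": "adam"}}
for ov in ["dataset.downsample_factor=2", "optimizer.type=selective_adam"]:
    apply_override(cfg, ov)
print(cfg["optimizer"]["type"])  # selective_adam
```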
Note
For ScanNet++, we expect the dataset to be preprocessed using the method described in FisheyeGS.
Note
If you're running from the PyCharm IDE, enable the rich console as follows: Run Configuration > Modify Options > Emulate terminal in output console
To use image masks, you need to provide a mask for each image in the dataset. A mask is a binary grayscale image (values 0 and 1) that marks the parts of the image that should not be used during training: all pixels with value 0 are ignored in the loss computation.
Each mask must have the same resolution as its corresponding image and be stored in the same folder, under the same name but with a _mask.png extension. For example, to mask out parts of the image path-to-image/image.jpeg, store the mask at path-to-image/image_mask.png.
NOTE: The masks are only used for loss computation and not for computing the metrics.
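Both conventions can be sketched in a few lines of plain Python (an illustration, not the repository's actual loading or loss code):

```python
from pathlib import Path

def mask_path(image_path: str) -> Path:
    """Derive the expected mask path: <folder>/<name>_mask.png."""
    p = Path(image_path)
    return p.with_name(p.stem + "_mask.png")

def masked_l1(pred, target, mask):
    """Mean absolute error over pixels where mask == 1; zeros contribute nothing."""
    kept = [(p, t) for p, t, m in zip(pred, target, mask) if m == 1]
    if not kept:
        return 0.0
    return sum(abs(p - t) for p, t in kept) / len(kept)

print(mask_path("path-to-image/image.jpeg"))  # path-to-image/image_mask.png
print(masked_l1([1.0, 1.0, 1.0], [0.0, 0.0, 0.0], [1, 1, 0]))  # 1.0
```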
As a beta feature, Omniverse Kit 107.3 and Isaac Sim 5.0 are able to support rendering 3D Gaussians in a specific custom USDZ-based format that uses an extension of the UsdVolVolume Schema.
The 3DGRUT repository can output trained scenes to this format by enabling the export_usd flag:
python train.py --config-name apps/colmap_3dgut.yaml path=data/mipnerf360/garden/ out_dir=runs experiment_name=garden_3dgut dataset.downsample_factor=2 export_usd.enabled=true

Note
The USD output schema is currently compatible with Isaac Sim 5.0, but how USD and reconstruction workflows work together is highly likely to change in future versions. This is a beta feature.
If you have existing Gaussian data in PLY format, for example, from 3DGS, you can convert it to the USDZ format using the ply_to_usd.py script:
python -m threedgrut.export.scripts.ply_to_usd path/to/your/model.ply --output_file path/to/output.usdz

This is useful for converting 3DGS models from other sources to the USDZ format. Note that the resulting USDZ does not include a mesh. If you need a mesh inside the USDZ (e.g. for collision geometry), follow the next step.
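A 3DGS-style PLY stores one vertex per Gaussian, with the particle attributes declared in its ASCII header. You can check which attributes a file carries with the standard library alone (a sketch; the property names in the comment are typical of 3DGS exports, not guaranteed):

```python
import os
import tempfile

def ply_header_properties(path: str) -> list:
    """Return the property names declared in a PLY header."""
    props = []
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", "replace").strip()
            if line.startswith("property"):
                props.append(line.split()[-1])
            elif line == "end_header":
                break
    return props

# Write a minimal stand-in header to demonstrate (real 3DGS exports also
# declare scale_*, rot_*, and f_dc_*/f_rest_* SH coefficients).
path = os.path.join(tempfile.gettempdir(), "demo_gaussians.ply")
with open(path, "wb") as f:
    f.write(b"ply\nformat binary_little_endian 1.0\nelement vertex 1\n"
            b"property float x\nproperty float y\nproperty float z\n"
            b"property float opacity\nend_header\n")
print(ply_header_properties(path))  # ['x', 'y', 'z', 'opacity']
```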
You can add a mesh (PLY or USD) into an existing USDZ file using the add_mesh_to_usdz.py script. This is useful for producing USDZ assets with physics properties such as collision geometry.
python -m threedgrut.export.scripts.add_mesh_to_usdz --input_usdz path/to/input.usdz --output_usdz path/to/output.usdz --mesh_ply path/to/mesh.ply --set_collision

Optional flags:
- --set_collision: enable collision on mesh prims.
- --set_invisible: make mesh prims invisible.
- --referencing_usd: specify which USD file in the package to modify (default: auto-detect the one with a Volume prim).
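Background for the --referencing_usd flag: a .usdz package is a zip archive with uncompressed entries, so its USD layers can be listed with the standard library to see which file would be modified. A sketch using a stand-in archive (the entries below are placeholders, not valid USD layers):

```python
import os
import tempfile
import zipfile

pkg = os.path.join(tempfile.gettempdir(), "demo.usdz")
# Build a stand-in package; real .usdz files use ZIP_STORED (no compression).
with zipfile.ZipFile(pkg, "w", zipfile.ZIP_STORED) as z:
    z.writestr("scene.usdc", b"")  # placeholder entry
    z.writestr("mesh.usd", b"")    # placeholder entry
with zipfile.ZipFile(pkg) as z:
    print(z.namelist())  # ['scene.usdc', 'mesh.usd']
```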
Evaluate a checkpoint with splatting, the OptiX tracer, or PyTorch:
python render.py --checkpoint runs/lego/ckpt_last.pt --out-dir outputs/eval

To run training with the interactive GUI:
python train.py --config-name apps/nerf_synthetic_3dgut.yaml path=data/nerf_synthetic/lego with_gui=True

Note
Remember to set the DISPLAY environment variable if you are running on a remote server from the command line.
Alternatively, use the viser GUI contributed by the community (@tangkangqi):
python train.py --config-name apps/nerf_synthetic_3dgut.yaml path=data/nerf_synthetic/lego with_viser_gui=True

Note
Remember to install viser first via pip install viser, and to forward port 8080 to your local machine if you are running on a remote server.
python train.py --config-name apps/nerf_synthetic_3dgut.yaml path=data/nerf_synthetic/lego with_gui=True test_last=False export_ingp.enabled=False resume=runs/lego/ckpt_last.pt

On startup, you might see a black screen, but you can use the GUI to navigate to the correct camera views.

Similarly, you can use the viser GUI by setting with_viser_gui=True instead of with_gui=True.
We provide scripts to reproduce results reported in publications.
# Training
bash ./benchmark/mipnerf360_3dgut.sh <config-yaml>
# Rendering
bash ./benchmark/mipnerf360_3dgut_render.sh <results-folder>

3DGRT Results Produced on RTX 5090
NeRF Synthetic Dataset
bash ./benchmark/nerf_synthetic.sh apps/nerf_synthetic_3dgrt.yaml
bash ./benchmark/nerf_synthetic_render.sh results/nerf_synthetic

| Scene | PSNR | SSIM | Train (s) | FPS |
|---|---|---|---|---|
| Chair | 35.85 | 0.988 | 556.4 | 299 |
| Drums | 25.87 | 0.953 | 462.8 | 389 |
| Ficus | 36.57 | 0.989 | 331.0 | 465 |
| Hotdog | 37.88 | 0.986 | 597.0 | 270 |
| Lego | 36.70 | 0.985 | 469.8 | 360 |
| Materials | 30.42 | 0.962 | 463.3 | 347 |
| Mic | 35.90 | 0.992 | 443.4 | 291 |
| Ship | 31.73 | 0.909 | 510.7 | 360 |
| Average | 33.87 | 0.971 | 479.3 | 347 |
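As a quick sanity check, the Average row follows from the per-scene column; for example, the mean PSNR can be recomputed in a couple of lines (up to rounding, this agrees with the reported 33.87):

```python
# Recompute the average PSNR from the NeRF Synthetic table above.
psnr = [35.85, 25.87, 36.57, 37.88, 36.70, 30.42, 35.90, 31.73]
avg = sum(psnr) / len(psnr)
print(f"{avg:.3f}")
```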
MipNeRF360 Dataset
bash ./benchmark/mipnerf360.sh apps/colmap_3dgrt.yaml
bash ./benchmark/mipnerf360_render.sh results/mipnerf360

| Scene | PSNR | SSIM | Train (s) | FPS |
|---|---|---|---|---|
| Bicycle | 24.85 | 0.748 | 2335 | 66 |
| Bonsai | 31.95 | 0.942 | 3383 | 72 |
| Counter | 28.47 | 0.905 | 3247 | 62 |
| Flowers | 21.42 | 0.615 | 2090 | 86 |
| Garden | 26.97 | 0.852 | 2253 | 70 |
| Kitchen | 30.13 | 0.921 | 4837 | 39 |
| Room | 30.35 | 0.911 | 2734 | 73 |
| Stump | 26.37 | 0.770 | 1995 | 73 |
| Treehill | 22.08 | 0.622 | 2413 | 68 |
| Average | 27.22 | 0.817 | 2869 | 68 |
3DGUT Results Produced on RTX 5090
NeRF Synthetic Dataset
bash ./benchmark/nerf_synthetic.sh paper/3dgut/unsorted_nerf_synthetic.yaml
bash ./benchmark/nerf_synthetic_render.sh results/nerf_synthetic

| Scene | PSNR | SSIM | Train (s) | FPS |
|---|---|---|---|---|
| Chair | 35.61 | 0.988 | 265.6 | 599 |
| Drums | 25.99 | 0.953 | 254.1 | 694 |
| Ficus | 36.43 | 0.988 | 183.5 | 1053 |
| Hotdog | 38.11 | 0.986 | 184.8 | 952 |
| Lego | 36.47 | 0.984 | 221.7 | 826 |
| Materials | 30.39 | 0.960 | 194.3 | 1000 |
| Mic | 36.32 | 0.992 | 204.7 | 775 |
| Ship | 31.72 | 0.908 | 208.5 | 870 |
| Average | 33.88 | 0.970 | 214.6 | 846 |
MipNeRF360 Dataset
GS Strategy, Unsorted
bash ./benchmark/mipnerf360.sh paper/3dgut/unsorted_colmap.yaml
bash ./benchmark/mipnerf360_render.sh results/mipnerf360

| Scene | PSNR | SSIM | Train (s) | FPS |
|---|---|---|---|---|
| Bicycle | 25.01 | 0.759 | 949.8 | 275 |
| Bonsai | 32.46 | 0.945 | 485.3 | 362 |
| Counter | 29.14 | 0.911 | 484.5 | 380 |
| Flowers | 21.45 | 0.612 | 782.0 | 253 |
| Garden | 27.18 | 0.856 | 810.2 | 316 |
| Kitchen | 31.16 | 0.928 | 664.8 | 275 |
| Room | 31.63 | 0.920 | 448.8 | 370 |
| Stump | 26.50 | 0.773 | 742.6 | 319 |
| Treehill | 22.35 | 0.627 | 809.6 | 299 |
| Average | 27.43 | 0.815 | 686.4 | 317 |
MCMC Strategy, Unsorted
bash ./benchmark/mipnerf360.sh paper/3dgut/unsorted_colmap_mcmc.yaml
bash ./benchmark/mipnerf360_render.sh results/mipnerf360

| Scene | PSNR | SSIM | Train (s) | FPS |
|---|---|---|---|---|
| Bicycle | 25.31 | 0.765 | 502.3 | 361 |
| Bonsai | 32.51 | 0.947 | 670.6 | 274 |
| Counter | 29.40 | 0.916 | 752.7 | 254 |
| Flowers | 21.86 | 0.616 | 553.3 | 298 |
| Garden | 27.06 | 0.852 | 512.7 | 360 |
| Kitchen | 31.71 | 0.930 | 739.6 | 258 |
| Room | 32.04 | 0.928 | 643.7 | 313 |
| Stump | 27.06 | 0.795 | 487.0 | 339 |
| Treehill | 23.11 | 0.650 | 508.6 | 365 |
| Average | 27.78 | 0.822 | 596.7 | 308 |
GS Strategy, Unsorted, Sparse Adam
| Scene | PSNR | SSIM | Train (s) | FPS |
|---|---|---|---|---|
| Bicycle | 25.04 | 0.759 | 835.2 | - |
| Bonsai | 32.63 | 0.945 | 457.1 | - |
| Counter | 29.12 | 0.911 | 468.8 | - |
| Flowers | 21.55 | 0.614 | 741.7 | - |
| Garden | 27.12 | 0.855 | 757.4 | - |
| Kitchen | 31.37 | 0.929 | 639.3 | - |
| Room | 31.72 | 0.921 | 415.2 | - |
| Stump | 26.58 | 0.774 | 695.7 | - |
| Treehill | 22.30 | 0.625 | 749.8 | - |
| Average | 27.49 | 0.815 | 640.0 | - |
Scannet++ Dataset
bash ./benchmark/scannetpp.sh paper/3dgut/unsorted_scannetpp.yaml
bash ./benchmark/scannetpp_render.sh results/scannetpp

[!NOTE] We followed FisheyeGS's convention to prepare the dataset for fair comparisons.
| Scene | PSNR | SSIM | Train (s) | FPS |
|---|---|---|---|---|
| 0a5c013435 | 29.67 | 0.930 | 292.3 | 389 |
| 8d563fc2cc | 26.88 | 0.912 | 286.1 | 439 |
| bb87c292ad | 31.58 | 0.941 | 316.9 | 448 |
| d415cc449b | 28.12 | 0.871 | 394.6 | 483 |
| e8ea9b4da8 | 33.47 | 0.954 | 280.8 | 394 |
| fe1733741f | 25.60 | 0.858 | 355.8 | 450 |
| Average | 29.22 | 0.911 | 321.1 | 434 |
The playground allows interactive exploration of pretrained scenes, with ray-tracing effects such as inserted objects, reflections, refractions, depth of field, and more.
Run the playground UI to visualize a pretrained scene with:
python playground.py --gs_object <ckpt_path>
See Playground README for details.
Update (2025/04): The playground engine is now exposed, and remote rendering is supported; see README for details.
Contributions are welcome! Please feel free to submit a pull request.
Formatting uses black and isort. Please run

black . --target-version=py311 --line-length=120 --exclude=thirdparty/tiny-cuda-nn
isort . --skip=thirdparty/tiny-cuda-nn --profile=black

before submitting a pull request.
@article{loccoz20243dgrt,
author = {Nicolas Moenne-Loccoz and Ashkan Mirzaei and Or Perel and Riccardo de Lutio and Janick Martinez Esturo and Gavriel State and Sanja Fidler and Nicholas Sharp and Zan Gojcic},
title = {3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes},
journal = {ACM Transactions on Graphics and SIGGRAPH Asia},
year = {2024},
}
@article{wu20253dgut,
title={3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting},
author={Wu, Qi and Martinez Esturo, Janick and Mirzaei, Ashkan and Moenne-Loccoz, Nicolas and Gojcic, Zan},
journal = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2025}
}
We sincerely thank our colleagues for their valuable contributions to this project.
Hassan Abu Alhaija, Ronnie Sharif, Beau Perschall, and Lars Fabiunke for assistance with assets. Greg Muthler, Magnus Andersson, Maksim Eisenstein, Tanki Zhang, Nathan Morrical, Dietger van Antwerpen, and John Burgess for performance feedback. Thomas MΓΌller, Merlin Nimier-David, and Carsten Kolve for inspiration and pointers. Ziyu Chen, Clement Fuji-Tsang, Masha Shugrina, and George Kopanas for technical and experiment assistance, and Ramana Kiran and Shailesh Mishra for typo fixes.

