Girish Chandar Ganesan1 · Yuliang Guo2 · Liu Ren2 · Xiaoming Liu1,3
1Michigan State University 2Bosch Research North America 3University of North Carolina at Chapel Hill
- Training code for UniDAC.
- Demo code for images with unknown camera parameters.
- Demo code for easy setup and usage.
- 2026-03-13: Release of UniDAC checkpoint trained on moderately sized datasets.
- 2026-03-13: Testing and evaluation pipeline for zero-shot metric depth estimation on perspective, fisheye, and 360-degree datasets.
- 2026-03-13: Data preparation and curation scripts.
- 2026-02-20: UniDAC accepted by CVPR 2026!
UniDAC outperforms all prior metric depth estimation methods trained with perspective images, on both indoor and outdoor datasets, and sets the state of the art in zero-shot cross-camera generalization and universal domain robustness. UniDAC also outperforms UniK3D, even though the latter is trained on large-FoV images with a much larger training set, demonstrating the robustness of UniDAC. Matterport3D is present in the training set of UniK3D, so we omit its results for that method.
| Methods | Dataset Size | ScanNet++ (δ₁ ↑ / Abs.Rel ↓) | Pano3D-GV2 (δ₁ ↑ / Abs.Rel ↓) | KITTI-360 (δ₁ ↑ / Abs.Rel ↓) | Matterport3D (δ₁ ↑ / Abs.Rel ↓) |
|---|---|---|---|---|---|
| UniK3D | 8M | 0.651 / 0.253 | 0.785 / 0.170 | 0.817 / 0.244 | - / - |
| Metric3Dv2 | 16M | 0.536 / 0.223 | 0.404 / 0.307 | 0.716 / 0.200 | 0.438 / 0.292 |
| UniDepth | 3M | 0.364 / 0.497 | 0.247 / 0.789 | 0.481 / 0.294 | 0.258 / 0.765 |
| DACU | 0.8M | 0.658 / 0.233 | 0.684 / 0.203 | 0.708 / 0.186 | 0.662 / 0.215 |
| UniDAC | 1.45M | 0.918 / 0.097 | 0.768 / 0.161 | 0.836 / 0.141 | 0.745 / 0.175 |
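The two metrics in the table are the standard ones for metric depth evaluation. A minimal numpy sketch (not part of the repo), assuming dense, strictly positive ground-truth depth:

```python
import numpy as np

def abs_rel(pred, gt):
    # Absolute relative error: mean of |pred - gt| / gt (lower is better).
    return np.mean(np.abs(pred - gt) / gt)

def delta1(pred, gt, thresh=1.25):
    # δ₁ accuracy: fraction of pixels where max(pred/gt, gt/pred) < 1.25
    # (higher is better).
    ratio = np.maximum(pred / gt, gt / pred)
    return np.mean(ratio < thresh)

gt = np.array([1.0, 2.0, 4.0])
pred = np.array([1.1, 2.0, 6.0])
print(abs_rel(pred, gt))  # ≈ 0.2
print(delta1(pred, gt))   # ≈ 0.667 (2 of 3 pixels within the 1.25 threshold)
```

In practice, invalid pixels (zero or missing ground truth) are masked out before computing either metric.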
git clone https://github.com/girish1511/UniDAC
cd UniDAC

Alternatively, this repository can be run from within Conda alone.
conda create -n unidac python=3.10.18 -y
conda activate unidac
pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
export PYTHONPATH="$PWD:$PYTHONPATH"

The training set consists of four outdoor datasets and three indoor datasets. The testing set consists of two 360-degree datasets, two fisheye datasets, and four perspective datasets.
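After installation, a quick sanity check helps confirm the core packages resolve before preparing data. This is a hypothetical helper, not part of the repo:

```python
import importlib.util

def check_environment(packages=("torch", "torchvision", "torchaudio")):
    # Return the subset of packages that cannot be found on the current path.
    return [p for p in packages if importlib.util.find_spec(p) is None]

missing = check_environment()
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("Environment looks good.")
```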
Please refer to DATA.md for detailed datasets preparation.
We provide a simple ready-to-run demo script in the demo folder along with the required sample inputs in demo/input.
demo/demo_unidac.py demonstrates the inference pipeline for diverse camera types and scenes, including ScanNet++ (Indoor, Fisheye), Matterport3D (Indoor, 360), and KITTI-360 (Outdoor, Fisheye), using a unified model trained only on perspective images.
Download the checkpoint from and place it in checkpoints/.
You can then run the demo script with the following command; the visualizations will be stored in demo/output:
bash demo.sh
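To inspect a predicted metric depth map beyond the saved visualizations, it can be back-projected to a 3-D point cloud. The sketch below is generic and not part of the repo; it assumes the perspective-camera case with hypothetical pinhole intrinsics fx, fy, cx, cy (fisheye and 360 inputs need their own camera models):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Back-project an (H, W) metric depth map to camera-space 3-D points
    # using the pinhole model: X = (u - cx) / fx * Z, Y = (v - cy) / fy * Z.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)  # (H, W, 3)

# Example: a flat plane 2 m from the camera.
pts = depth_to_points(np.full((4, 4), 2.0), fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

The resulting array can be flattened to an N×3 point list and saved as a PLY for viewing in any point-cloud tool.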
Download the checkpoint from and place it in checkpoints/.
Run the following to evaluate and reproduce the results presented in the paper:
bash eval.sh <domain> <dataset>

Different config files for evaluating the reported testing datasets are included in configs/test. Refer to the table below to set the <domain> and <dataset> arguments, which together select the corresponding configuration file for the dataset you wish to evaluate.
| | ScanNet++ | Matterport3D | Pano3D-GibsonV2 | KITTI-360 | KITTI | NYU | NuScenes | iBims-1 |
|---|---|---|---|---|---|---|---|---|
| `<domain>` | indoor | indoor | indoor | outdoor | outdoor | indoor | outdoor | indoor |
| `<dataset>` | scannetpp | gv2 | scannetpp | kitti360 | kitti | nyu | nuscenes | ibims |
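To evaluate every dataset in one pass, the table can be turned into a small driver script. This is a hypothetical convenience sketch, not part of the repo; the (domain, dataset) pairs are copied verbatim from the table above, and here the script only prints the commands rather than executing them:

```python
# (<domain>, <dataset>) argument pairs, in the table's column order.
PAIRS = [
    ("indoor", "scannetpp"),
    ("indoor", "gv2"),
    ("indoor", "scannetpp"),
    ("outdoor", "kitti360"),
    ("outdoor", "kitti"),
    ("indoor", "nyu"),
    ("outdoor", "nuscenes"),
    ("indoor", "ibims"),
]

for domain, dataset in PAIRS:
    # Replace print with subprocess.run(cmd.split()) to actually launch eval.sh.
    cmd = f"bash eval.sh {domain} {dataset}"
    print(cmd)
```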
We thank the authors of the following awesome codebases:
This software is released under the MIT license. You can view a license summary here.

