
HY-World 2.0: A Multi-Modal World Model for Reconstructing, Generating, and Simulating 3D Worlds

English | 简体中文

HY-World-2.0 Teaser


"What Is Now Proved Was Once Only Imagined"

🎥 Video

hyworld2_en.mp4

🔥 News

  • [April 16, 2026]: 🚀 Release HY-World 2.0 technical report & partial code!
  • [April 16, 2026]: 🤗 Open-source WorldMirror 2.0 inference code and model weights!
  • [Coming Soon]: Release full HY-World 2.0 (World Generation) inference code.
  • [Coming Soon]: Release Panorama Generation (HY-Pano 2.0) model weights & code.
  • [Coming Soon]: Release Trajectory Planning (WorldNav) code.
  • [Coming Soon]: Release World Expansion (WorldStereo 2.0) model weights & inference code.

📖 Introduction

HY-World 2.0 is a multi-modal world model framework for world generation and world reconstruction. It accepts diverse input modalities — text, single-view images, multi-view images, and videos — and produces 3D world representations (meshes / 3D Gaussian Splatting, 3DGS). It offers two core capabilities:

  • World Generation (text / single image → 3D world): synthesizes high-fidelity, navigable 3D scenes through a four-stage pipeline: a) Panorama Generation with HY-Pano 2.0, b) Trajectory Planning with WorldNav, c) World Expansion with WorldStereo 2.0, and d) World Composition with WorldMirror 2.0 & 3DGS learning.
  • World Reconstruction (multi-view images / video → 3D): Powered by WorldMirror 2.0, a unified feed-forward model that simultaneously predicts depth, surface normals, camera parameters, 3D point clouds, and 3DGS attributes in a single forward pass.

HY-World 2.0 is an open-source state-of-the-art world model. We will release all model weights, code, and technical details to facilitate reproducibility and advance research in this field.

Why 3D World Models?

Existing world models, such as Genie 3, Cosmos, and HY-World 1.5 (WorldPlay+WorldCompass), generate pixel-level videos — essentially "watching a movie" that vanishes once playback ends. HY-World 2.0 takes a fundamentally different approach: it directly produces editable, persistent 3D assets (meshes / 3DGS) that can be imported into game engines like Blender/Unity/Unreal Engine/Isaac Sim — more like "building a playable game" than recording a clip. This paradigm shift natively resolves many long-standing pain points of video world models:

| | Video World Models | 3D World Model (HY-World 2.0) |
|---|---|---|
| Output | Pixel videos (non-editable) | Real 3D assets: meshes / 3DGS (fully editable) |
| Playable Duration | Limited (typically 1 min) | Unlimited: assets persist permanently |
| 3D Consistency | No (flickering, artifacts across views) | Native: inherently consistent in 3D |
| Real-Time Rendering | Requires per-frame inference; high latency | Real-time rendering on consumer GPUs |
| Controllability | Weak (imprecise character control, no real physics) | Precise: zero-error control, real physics collision, accurate lighting |
| Inference Cost | Accumulates with every interaction | One-time generation; rendering cost ≈ 0 |
| Engine Compatibility | ✗ Video files only | ✓ Directly importable into Blender / UE / Isaac Sim |
| | $\color{IndianRed}{\textsf{Watch a video, then it's gone}}$ | $\color{RoyalBlue}{\textbf{Build a world, keep it forever}}$ |

All of the above are real 3D assets (not generated videos), entirely created by HY-World 2.0 and captured from live real-time interaction.

✨ Highlights

  • Real 3D Worlds, Not Just Videos

    Unlike video-only world models (e.g., Genie 3, HY-World 1.5), HY-World 2.0 generates real 3D assets — 3DGS, meshes, and point clouds — that are freely explorable, editable, and directly importable into Unity / Unreal Engine / Isaac. From a single text prompt or image, create navigable 3D worlds with diverse styles: realistic, cartoon, game, and more.

  • Instant 3D Reconstruction from Photos & Videos

    Powered by WorldMirror 2.0, a unified feed-forward model that predicts dense point clouds, depth maps, surface normals, camera parameters, and 3DGS from multi-view images or casual videos in a single forward pass. Supports flexible-resolution inference (50K–500K pixels) with SOTA accuracy. Capture a video, get a digital twin.

  • Interactive Character Exploration

    Go beyond viewing — play inside your generated worlds. HY-World 2.0 supports first-person navigation and a third-person character mode, enabling users to freely explore AI-generated streets, buildings, and landscapes with physics-based collision. Visit our product page for a free trial (currently very crowded).

🧩 Architecture

  • Refer to our tech report for more details

    A systematic pipeline of HY-World 2.0 — Panorama Generation (HY-Pano-2.0) → Trajectory Planning (WorldNav) → World Expansion (WorldStereo 2.0) → World Composition (WorldMirror 2.0 + 3DGS learning) — that automatically transforms text or a single image into a high-fidelity, navigable 3D world (3DGS / mesh outputs).

📝 Open-Source Plan

  • ✅ Technical Report
  • ✅ WorldMirror 2.0 Code & Model Checkpoints
  • ⬜ Full Inference Code for World Generation (WorldNav + World Composition)
  • ⬜ Panorama Generation (HY-Pano 2.0) Model & Code — HunyuanWorld 1.0 available as interim alternative
  • ⬜ World Expansion (WorldStereo 2.0) Model & Code — WorldStereo available as interim alternative

🎁 Model Zoo

World Reconstruction — WorldMirror Series

| Model | Description | Params | Date | Hugging Face |
|---|---|---|---|---|
| WorldMirror-2 [new] | Multi-view / video → 3D reconstruction | ~1.2B | 2026.4 | Download |
| WorldMirror-1 | Multi-view / video → 3D reconstruction (legacy) | ~1.2B | 2025.10 | Download |

Panorama Generation — HY-Pano Series

| Model | Description | Params | Date | Hugging Face |
|---|---|---|---|---|
| HY-Pano-2 [new] | Text / image → 360° panorama | | | Coming Soon |

World Expansion — WorldStereo Series

| Model | Description | Params | Date | Hugging Face |
|---|---|---|---|---|
| WorldStereo-2 [new] | Panorama → 3DGS world | | | Coming Soon |

Spatial Planning — WorldNav Series

| Algorithm | Description | Params | Date |
|---|---|---|---|
| WorldNav [new] | Panorama → Camera Traj. | | Coming Soon |

We recommend referring to our previous works, WorldStereo and WorldMirror, for background knowledge on 3D world generation and reconstruction.

🤗 Get Started

Install Requirements

We recommend CUDA 12.4 for installation.

# 1. Clone the repository
git clone https://github.com/Tencent-Hunyuan/HY-World-2.0
cd HY-World-2.0

# 2. Create conda environment
conda create -n hyworld2 python=3.10
conda activate hyworld2

# 3. Install PyTorch (CUDA 12.4)
pip install torch==2.4.0 torchvision==0.19.0 --index-url https://download.pytorch.org/whl/cu124

# 4. Install dependencies
pip install -r requirements.txt

# 5. Install FlashAttention
# (Recommended) Install FlashAttention-3 (built from source; targets NVIDIA Hopper GPUs such as H100)
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention/hopper
python setup.py install
cd ../../
rm -rf flash-attention

# For simpler installation, you can also use FlashAttention-2
pip install flash-attn --no-build-isolation

Code Usage — Panorama Generation (HY-Pano-2)

Coming soon.

Code Usage — World Generation (WorldNav, WorldStereo-2, and 3DGS)

Coming soon.

We recommend referring to our previous work, WorldStereo, for the open-source preview version of WorldStereo-2.

Code Usage — WorldMirror 2.0

WorldMirror 2.0 can be used in three ways: a Python API, a command-line interface, and a Gradio web app.

Python API: We provide a diffusers-like Python API for WorldMirror 2.0. Model weights are automatically downloaded from Hugging Face on the first run.

from hyworld2.worldrecon.pipeline import WorldMirrorPipeline

pipeline = WorldMirrorPipeline.from_pretrained('tencent/HY-World-2.0')
result = pipeline('path/to/images')

With Prior Injection (Camera & Depth):

result = pipeline(
    'path/to/images',
    prior_cam_path='path/to/prior_camera.json',
    prior_depth_path='path/to/prior_depth/',
)

For the detailed structure of camera/depth priors and how to prepare them, see Prior Preparation Guide.

CLI:

# Single GPU
python -m hyworld2.worldrecon.pipeline --input_path path/to/images

# Multi-GPU
torchrun --nproc_per_node=2 -m hyworld2.worldrecon.pipeline \
    --input_path path/to/images \
    --use_fsdp --enable_bf16

Important: In multi-GPU mode, the number of input images must be >= the number of GPUs. For example, with --nproc_per_node=8, provide at least 8 images.
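As a quick sanity check before a multi-GPU launch, you can count the input images first. This is a minimal shell sketch, not part of the repository: `NUM_GPUS` and the file names are illustrative, and a temporary directory stands in for your image folder.

```shell
# Illustrative check: the pipeline needs at least one image per GPU process.
NUM_GPUS=2
IMG_DIR=$(mktemp -d)                      # stand-in for path/to/images
touch "$IMG_DIR/a.jpg" "$IMG_DIR/b.jpg" "$IMG_DIR/c.jpg"

NUM_IMGS=$(find "$IMG_DIR" -maxdepth 1 -type f | wc -l)
if [ "$NUM_IMGS" -ge "$NUM_GPUS" ]; then
  echo "OK: $NUM_IMGS images for $NUM_GPUS GPUs"
else
  echo "ERROR: need at least $NUM_GPUS images, found $NUM_IMGS" >&2
fi
```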

Gradio App — WorldMirror 2.0

We provide an interactive Gradio web demo for WorldMirror 2.0. Upload images or videos and visualize 3DGS, point clouds, depth maps, normal maps, and camera parameters in your browser.

# Single GPU
python -m hyworld2.worldrecon.gradio_app

# Multi-GPU
torchrun --nproc_per_node=2 -m hyworld2.worldrecon.gradio_app \
    --use_fsdp --enable_bf16

For the full list of Gradio app arguments (port, share, local checkpoints, etc.), see DOCUMENTATION.md.

🔮 Performance

For full benchmark results, please refer to the technical report.

WorldStereo 2.0 — Camera Control

RotErr, TransErr, and ATE are camera metrics; the remaining columns measure visual quality.

| Methods | RotErr ↓ | TransErr ↓ | ATE ↓ | Q-Align ↑ | CLIP-IQA+ ↑ | Laion-Aes ↑ | CLIP-I ↑ |
|---|---|---|---|---|---|---|---|
| SEVA | 1.690 | 1.578 | 2.879 | 3.232 | 0.479 | 4.623 | 77.16 |
| Gen3C | 0.944 | 1.580 | 2.789 | 3.353 | 0.489 | 4.863 | 82.33 |
| WorldStereo | 0.762 | 1.245 | 2.141 | 4.149 | 0.547 | 5.257 | 89.05 |
| WorldStereo 2.0 | 0.492 | 0.968 | 1.768 | 4.205 | 0.544 | 5.266 | 89.43 |

WorldStereo 2.0 — Single-View-Generated Reconstruction

The first four metric columns are evaluated on Tanks-and-Temples (T&T), the last four on MipNeRF360 (M360).

| Methods | T&T Prec. ↑ | T&T Rec. ↑ | T&T F1 ↑ | T&T AUC ↑ | M360 Prec. ↑ | M360 Rec. ↑ | M360 F1 ↑ | M360 AUC ↑ |
|---|---|---|---|---|---|---|---|---|
| SEVA | 33.59 | 35.34 | 36.73 | 51.03 | 22.38 | 55.63 | 28.75 | 46.81 |
| Gen3C | 46.73 | 25.51 | 31.24 | 42.44 | 23.28 | 75.37 | 35.26 | 52.10 |
| Lyra | 50.38 | 28.67 | 32.54 | 43.05 | 30.02 | 58.60 | 36.05 | 49.89 |
| FlashWorld | 26.58 | 20.72 | 22.29 | 30.45 | 35.97 | 53.77 | 42.60 | 53.86 |
| WorldStereo 2.0 | 43.62 | 41.02 | 41.43 | 58.19 | 43.19 | 65.32 | 51.27 | 65.79 |
| WorldStereo 2.0 (DMD) | 40.41 | 44.41 | 43.16 | 60.09 | 42.34 | 64.83 | 50.52 | 65.64 |
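
The Precision / Recall / F1 metrics above follow the standard point-cloud evaluation protocol: precision is the fraction of predicted points within a distance threshold of the ground truth, recall is the converse, and F1 is their harmonic mean. A minimal NumPy sketch of that idea (brute-force nearest neighbors; the threshold value is illustrative, and real benchmarks use KD-trees and per-scene thresholds):

```python
import numpy as np

def fscore(pred: np.ndarray, gt: np.ndarray, tau: float = 0.05):
    """Precision / recall / F1 for point clouds at distance threshold tau.

    pred: (N, 3) predicted points; gt: (M, 3) ground-truth points.
    Brute-force O(N*M) distance matrix, for illustration only.
    """
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = float((d.min(axis=1) <= tau).mean())  # pred -> gt
    recall = float((d.min(axis=0) <= tau).mean())     # gt -> pred
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1

gt = np.random.rand(200, 3)
p, r, f = fscore(gt.copy(), gt)  # identical clouds: perfect scores
print(p, r, f)  # 1.0 1.0 1.0
```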

WorldMirror 2.0 — Point Map Reconstruction

Point Map Reconstruction on 7-Scenes, NRGBD, and DTU. We report the mean Accuracy and Completeness of WorldMirror under different input configurations. Bold results are best. "L / M / H" denote low / medium / high inference resolution. "+ all priors" denotes injection of camera extrinsics, camera intrinsics, and depth priors.

| Method | 7-Scenes Acc. ↓ | 7-Scenes Comp. ↓ | NRGBD Acc. ↓ | NRGBD Comp. ↓ | DTU Acc. ↓ | DTU Comp. ↓ |
|---|---|---|---|---|---|---|
| WorldMirror 1.0 (L) | 0.043 | 0.055 | 0.046 | 0.049 | 1.476 | 1.768 |
| WorldMirror 1.0 (L + all priors) | 0.021 | 0.026 | 0.022 | 0.020 | 1.347 | 1.392 |
| WorldMirror 1.0 (M) | 0.043 | 0.049 | 0.041 | 0.045 | 1.017 | 1.780 |
| WorldMirror 1.0 (M + all priors) | 0.018 | 0.023 | 0.016 | 0.014 | 0.735 | 0.935 |
| WorldMirror 1.0 (H) | 0.079 | 0.087 | 0.077 | 0.093 | 2.271 | 2.113 |
| WorldMirror 1.0 (H + all priors) | 0.042 | 0.041 | 0.078 | 0.082 | 1.773 | 1.478 |
| WorldMirror 2.0 (L) | 0.041 | 0.052 | 0.047 | 0.058 | 1.352 | 2.009 |
| WorldMirror 2.0 (L + all priors) | 0.019 | 0.024 | 0.017 | 0.015 | 1.100 | 1.201 |
| WorldMirror 2.0 (M) | 0.033 | 0.046 | 0.039 | 0.047 | 1.005 | 1.892 |
| WorldMirror 2.0 (M + all priors) | 0.013 | 0.017 | **0.013** | **0.013** | 0.690 | 0.876 |
| WorldMirror 2.0 (H) | 0.037 | 0.040 | 0.046 | 0.053 | 0.845 | 1.904 |
| WorldMirror 2.0 (H + all priors) | **0.012** | **0.016** | 0.015 | 0.016 | **0.554** | **0.771** |

7-Scenes and NRGBD are scene-level benchmarks; DTU is object-level.
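
Accuracy and Completeness in this table are the standard Chamfer-style point-map metrics: the mean distance from each predicted point to its nearest ground-truth point (accuracy), and from each ground-truth point to its nearest predicted point (completeness). A minimal sketch, again with brute-force distances for illustration only:

```python
import numpy as np

def accuracy_completeness(pred: np.ndarray, gt: np.ndarray):
    """Mean nearest-neighbor distances between point clouds.

    Accuracy:     pred -> gt (how close predictions are to the surface).
    Completeness: gt -> pred (how much of the surface is covered).
    """
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    acc = float(d.min(axis=1).mean())
    comp = float(d.min(axis=0).mean())
    return acc, comp

gt = np.array([[0.0, 0, 0], [1, 0, 0]])
pred = np.array([[0.0, 0.1, 0]])  # one point, 0.1 from the nearest GT point
acc, comp = accuracy_completeness(pred, gt)
print(acc, comp)  # high completeness error: the second GT point is uncovered
```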

WorldMirror 2.0 — Prior Comparison

Comparison with Pow3R and MapAnything under Different Prior Conditions. Results are averaged on 7-Scenes, NRGBD, and DTU datasets. Pow3R (pro) refers to the original Pow3R with Procrustes alignment.

🎬 More Examples

📖 Documentation

For detailed usage guides, parameter references, output format specifications, and prior injection instructions, see DOCUMENTATION.md.

📚 Citation

If you find HY-World 2.0 useful for your research, please cite:

@article{hyworld22026,
  title={HY-World 2.0: A Multi-Modal World Model for Reconstructing, Generating, and Simulating 3D Worlds},
  author={Tencent HY-World Team},
  journal={arXiv preprint},
  year={2026}
}

@article{hunyuanworld2025tencent,
  title={HunyuanWorld 1.0: Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels},
  author={Team HunyuanWorld},
  journal={arXiv preprint},
  year={2025}
}

📧 Contact

Please send emails to tengfeiwang12@gmail.com for questions or feedback.

🙏 Acknowledgements

We would like to thank HunyuanWorld 1.0, WorldMirror, WorldPlay, WorldStereo, and HunyuanImage for their great work.
