Burn ONNX

Convert ONNX models into native, backend-agnostic Burn code for inference and fine-tuning.

Import ONNX models into the Burn deep learning framework.


Overview

burn-onnx converts ONNX models into native Burn Rust code, letting you run models exported from PyTorch, TensorFlow, and other frameworks on any Burn backend, from WebAssembly to CUDA.

Key features:

  • Generates readable, modifiable Rust source code from ONNX models
  • Produces burnpack weight files for efficient loading
  • Works with any Burn backend (CPU, GPU, WebGPU, embedded)
  • Supports both std and no_std environments
  • Full opset compliance: all supported operators work across ONNX opset versions 1 through 24
  • Graph simplification (enabled by default): attention coalescing, constant folding, constant shape propagation, idempotent-op elimination, identity-element elimination, CSE, dead code elimination, and permute-reshape detection
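For no_std targets, the usual Cargo pattern is to disable default features. This is a sketch only, assuming burn-onnx follows the common convention of gating std behind a default feature; check the crate's feature list before relying on it:

```toml
[build-dependencies]
# Assumption: a default `std` feature exists and can be opted out of.
burn-onnx = { version = "0.21", default-features = false }
```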

Quick Start

Add to your Cargo.toml:

[build-dependencies]
burn-onnx = "0.21"
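The generated code itself depends on the burn crate, so your crate also needs burn as a regular dependency. A minimal sketch (the version shown is an assumption; match it to your burn-onnx version):

```toml
[dependencies]
burn = "0.21"

[build-dependencies]
burn-onnx = "0.21"
```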

In your build.rs:

use burn_onnx::ModelGen;

fn main() {
    ModelGen::new()
        .input("src/model/my_model.onnx")
        .out_dir("model/")
        .run_from_script();
}

Include the generated code in src/model/mod.rs:

pub mod my_model {
    include!(concat!(env!("OUT_DIR"), "/model/my_model.rs"));
}

Then use the model:

use burn::backend::NdArray;
use crate::model::my_model::Model;

let model: Model<NdArray<f32>> = Model::default();
let output = model.forward(input_tensor);
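Putting the steps above together, here is a fuller sketch of an inference entry point. The input shape (1×3×224×224) and tensor rank are assumptions for illustration; replace them with whatever your model's forward signature expects:

```rust
use burn::backend::NdArray;
use burn::tensor::Tensor;

use crate::model::my_model::Model;

fn main() {
    // Any Burn backend works here; NdArray is a pure-CPU choice.
    let device = Default::default();

    // Model::default() loads the weights generated at build time.
    let model: Model<NdArray<f32>> = Model::default();

    // Hypothetical input shape; use your model's expected dimensions.
    let input = Tensor::<NdArray<f32>, 4>::zeros([1, 3, 224, 224], &device);

    let output = model.forward(input);
    println!("output dims: {:?}", output.dims());
}
```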

For detailed usage instructions, see the ONNX Import Guide in the Burn Book.

Examples

Example                    Description
onnx-inference             Basic ONNX model inference
image-classification-web   WebAssembly/WebGPU image classifier

Supported Operators

See the Supported ONNX Operators table for the complete list of supported operators.

Contributing

We welcome contributions! Please read the Contributing Guidelines before opening a PR, and the Development Guide for architecture and implementation details.

For questions and discussions, join us on Discord.

License

Licensed under either the Apache License, Version 2.0 or the MIT license, at your option.
