## Install environment

```bash
conda env create -f environment.yml
conda activate jedi-linear
```

Prepare the dataset with:

```bash
bash prepare_dataset.sh
```

## Pretrained models

The models shown in the tables in the paper are included in `official_models.tar.gz`.
You can extract them with:

```bash
tar -xvf official_models.tar.gz
```

If you want to retrain the models, you can do so with the following commands. This will launch a whole Pareto scan, so many models will be saved.
```bash
KERAS_BACKEND=jax python jet_classifier.py -c configs/<config_file> -r train
```

The outputs are already included in `official_models.tar.gz`, but you can validate them with:
```bash
KERAS_BACKEND=jax python jet_classifier.py -c configs/<config_file> -r test verilog
```

Verilator may require a newer C++ compiler; we tested our code with g++ 15.1.1.
Due to size considerations, we removed the Vivado project files and included only the generated reports. You can re-run the synthesis with:
```bash
cd <output_directory>/<model_directory>/da4ml_verilog_prjs/<verilog_project>
vivado -mode batch -source build_prj.tcl
# Starting from v0.5.x, the layout of da4ml-generated projects changed,
# and the synthesis script is now named build_vivado_prj.tcl:
# vivado -mode batch -source build_vivado_prj.tcl
```

Using the `load_summary.py` script included in the tarball, you can generate the JSON report from all the synthesis results:
```bash
cd <output_directory>
for p in *-feature*; do
  for N in 8 16 32 64 128; do
    name=$(basename $p)
    for f in $p/*$N; do
      python3 load_summary.py -e $f/test_acc.json $f/da4ml_verilog_prjs/* -o summary/$N-particle-$name.json
    done
  done
done
```

## Citation

```bibtex
@inproceedings{jedi-linear,
  title={JEDI-linear: Fast and Efficient Graph Neural Networks for Jet Tagging on FPGAs},
  author={Que, Zhiqiang and Sun, Chang and Paramesvaran, Sudarshan and Clement, Emyr and Karakoulaki, Katerina and Brown, Christopher and Laatu, Lauri and Cox, Arianna and Tapper, Alexander and Luk, Wayne and Spiropulu, Maria},
  booktitle={2025 International Conference on Field Programmable Technology (FPT)},
  year={2025},
  organization={IEEE}
}
```
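The loop above writes one summary file per particle count and configuration under `summary/`. A minimal sketch of collecting those files into a single Python dict for further analysis — the key layout inside each JSON file is an assumption here, not the actual `load_summary.py` schema:

```python
# Sketch: merge the per-configuration summary JSONs generated above into
# one dict for plotting or tabulation. The contents of each JSON file are
# assumed to be arbitrary; adapt to what load_summary.py actually emits.
import glob
import json
import os


def collect_summaries(summary_dir: str) -> dict:
    """Map each summary file's stem (e.g. '8-particle-<name>') to its parsed contents."""
    merged = {}
    for path in sorted(glob.glob(os.path.join(summary_dir, "*.json"))):
        stem = os.path.splitext(os.path.basename(path))[0]
        with open(path) as f:
            merged[stem] = json.load(f)
    return merged


if __name__ == "__main__":
    for stem, data in collect_summaries("summary").items():
        print(stem, data)
```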
This repository contains the code for the paper "JEDI-linear: Fast and Efficient Graph Neural Networks for Jet Tagging on FPGAs" (https://arxiv.org/abs/2508.15468). The code can be run as follows:
- Clone the repository and install the dependencies. Also install one of the backends for training (`jax`, `tensorflow`, or `pytorch`):

  ```bash
  pip install -r requirements.txt
  pip install <your_backend>
  ```
- Download the dataset from https://zenodo.org/records/3602260
- Extract the training and testing splits (the downloaded validation split serves as the test set), and prepare them with

  ```bash
  python prepare_dataset.py -i /tmp/<train/validation>/ -o /tmp/<train/test>.h5 -j <n_processes>
  ```

  Place both `train.h5` and `test.h5` in the same directory, e.g., `/tmp/jet_data/`.
- Modify the configs so that `datapath` points to the dataset directory, and change the output directory `save_path` if needed.
- Run the training script:
  ```bash
  KERAS_BACKEND=<YOUR_BACKEND> python jet_classifier.py -c <CONFIG_FILE> -r train test verilog
  ```

  where `<YOUR_BACKEND>` can be `jax`, `tensorflow`, or `torch` depending on the backend you installed. The configs are located in `configs/`. The `-n$number` part of a config file name is the maximum number of particles to be used; `-3` means only `pt, eta, phi` are used, otherwise all 16 features are used; `uq1` means the network is uniformly quantized over the particle dimension and is permutation-invariant.
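The naming convention above can be decoded programmatically, e.g. when sweeping over many configs. A small sketch — the example filenames are made up; only the tokens `-n<number>`, `-3`, and `uq1` follow the convention described in this README:

```python
# Sketch: decode the config-file naming convention described above.
# The filenames used here are hypothetical examples.
import re


def describe_config(name: str) -> dict:
    m = re.search(r"-n(\d+)", name)
    return {
        "max_particles": int(m.group(1)) if m else None,  # from "-n$number"
        "features": 3 if "-3" in name else 16,  # "-3": only pt, eta, phi
        "uniform_quant": "uq1" in name,  # permutation-invariant variant
    }


print(describe_config("jedi-n16-3-uq1.yml"))
```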