Official PyTorch implementation of COOPERTRIM, the framework proposed in our paper "COOPERTRIM: Adaptive Data Selection for Uncertainty-Aware Cooperative Perception", accepted at ICLR 2026.
We present COOPERTRIM, an adaptive feature selection framework for cooperative perception that enhances representation learning through temporal uncertainty-driven feature selection, enabling bandwidth-efficient, accurate perception in multi-agent systems. It addresses two key challenges: relevance, identifying the features with the most impact on downstream tasks, and quantity, determining the optimal point to stop sharing based on scene and task complexity. We employ an ϵ-greedy training method that optimizes the bandwidth-performance trade-off by facilitating effective exploration and exploitation during training.
COOPERTRIM adaptively requests data based on scene complexity: increased data requests align with higher scene complexity. Dynamic objects trigger higher request volumes (Frames 1200, 200, 1700), as do complex static elements such as intersections (Frames 900, 250, 1600). Solid green lines show that CooperTrim maintains high IoU at reduced bandwidth compared to the baseline CoBEVT (dashed green lines).
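The ϵ-greedy selection idea can be illustrated with a minimal sketch. Everything below (`select_features`, the score-threshold policy, the uniform exploration budget) is a hypothetical illustration of the concept, not the paper's actual implementation:

```python
import random

def select_features(scores, threshold, epsilon=0.1):
    """Epsilon-greedy feature selection sketch (hypothetical).

    scores: per-feature relevance scores (higher = more impactful downstream).
    threshold: cutoff deciding how many features are worth sharing.
    With probability epsilon the selector explores a random sharing budget;
    otherwise it exploits the threshold-based cutoff.
    Returns indices of the selected features, most relevant first.
    """
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    if random.random() < epsilon:
        k = random.randint(1, len(scores))             # explore: random budget
    else:
        k = sum(1 for s in scores if s >= threshold)   # exploit: threshold cutoff
    return ranked[:max(k, 1)]
```

During training, the exploration branch lets the model observe the effect of sharing more or fewer features than the current policy would choose, which is what drives the bandwidth-performance trade-off described above.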
- 12/20/2025: First version of CooperTrim released.
- Provide easy data APIs for multiple popular multi-agent perception datasets
- Provide APIs to allow users to use different sensor modalities
  - LiDAR APIs
  - Camera APIs
  - Radar APIs
- Provide multiple SOTA 3D detection backbones
- Support multiple sparse convolution versions
  - Spconv 1.2.1
  - Spconv 2.x
- Support SOTA multi-agent perception models
- Provide a convenient log replay toolbox for the OPV2V dataset. Check here to see more details.
All data can be downloaded from UCLA Box. If you have a good internet connection, you can directly download the complete large zip file for each set, such as train.zip. If downloading large files is a problem, we also split each set into small chunks, available in the directories ending with _chunks, such as train_chunks. After downloading, run the following commands on each set to merge the chunks:
```
cat train.zip.part* > train.zip
unzip train.zip
```

Please refer to the data introduction and installation guide to prepare the data and install CooperTrim. For more details on the OPV2V data, please check our website.
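If you would rather script the merge across all sets, a small helper along these lines works (a sketch under the assumption that chunk files are named `<set>.zip.part*`; `merge_chunks` is not part of the repo):

```python
from pathlib import Path

def merge_chunks(data_dir, splits=("train", "validate", "test")):
    """Concatenate <split>.zip.part* chunks into <split>.zip for each split."""
    root = Path(data_dir)
    merged = []
    for split in splits:
        parts = sorted(root.glob(f"{split}.zip.part*"))
        if not parts:
            continue  # this split was downloaded as a single zip (or not at all)
        with open(root / f"{split}.zip", "wb") as out:
            for part in parts:
                out.write(part.read_bytes())
        merged.append(split)
    return merged
```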
Please check V2V4Real's website to download the data (OPV2V format).
After downloading the data, please put the data in the following structure:
```
├── v2v4real
│   ├── train
│   │   ├── testoutput_CAV_data_2022-03-15-09-54-40_1
│   ├── validate
│   ├── test
```

This section provides instructions to quickly set up, visualize, train, and test the CooperTrim framework for cooperative perception in autonomous driving. Follow the steps below to get started.
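To sanity-check that the data landed in the expected structure, a small helper like the following can be used (hypothetical, not shipped with the repo; it only checks the top-level splits):

```python
from pathlib import Path

def missing_v2v4real_splits(root):
    """Return the expected top-level splits that are missing under `root`."""
    expected = ("train", "validate", "test")
    return [s for s in expected if not (Path(root) / s).is_dir()]
```

An empty return value means all three splits are in place.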
Before proceeding with visualization, training, or testing, ensure you have the necessary environment and dependencies set up:
```
# Clone repo
git clone https://github.com/UCR-CISL/CooperTrim.git
cd CooperTrim
```

Go to any folder of interest: Segmentation_OPV2V, 3D_Detection_OPV2V, or 3D_Detection_V2V4Real.
```
# Setup conda environment
# For Segmentation_OPV2V:
conda env create -f cobevt_env.yaml
# For 3D_Detection_OPV2V and 3D_Detection_V2V4Real:
conda env create -f opencood_env.yaml

# Activate the environment you just created
conda activate {particular}_env
conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch
```
```
# Install dependencies
python setup.py build_ext --inplace
python setup.py develop
pip install shapely --only-binary shapely
```
To quickly visualize a single sample of the data with CooperTrim:

```
cd CooperTrim
python coopertrim/visualization/visualize_data.py [--scene ${SCENE_NUMBER} --sample ${SAMPLE_NUMBER}]
```

- scene: the i-th scene in the data. Default: 4
- sample: the j-th sample in the i-th scene. Default: 10
Before training, the correct config file needs to be placed in the folder passed to --model_dir (e.g., checkpoints_test). The configs are available in configs_folder. Select the config matching your training need, copy it into the folder referenced by --model_dir, and rename the file to "config.yaml".
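The copy-and-rename step can be scripted; the helper below is a hypothetical convenience (not part of CooperTrim), with illustrative paths:

```python
import shutil
from pathlib import Path

def stage_config(config_path, model_dir):
    """Copy a chosen config into model_dir and rename it to config.yaml."""
    dst_dir = Path(model_dir)
    dst_dir.mkdir(parents=True, exist_ok=True)
    dst = dst_dir / "config.yaml"
    shutil.copy(config_path, dst)
    return dst
```

For example, `stage_config("configs_folder/my_config.yaml", "coopertrim/checkpoints_test")` before launching training (file names here are illustrative).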
To train CooperTrim using a single GPU, for the segmentation task:

```
python coopertrim/tools/train_perception.py --hypes_yaml ${CONFIG_FILE} [--model_dir ${CHECKPOINT_FOLDER}]
```

Example:

```
python coopertrim/tools/train_perception.py --hypes_yaml coopertrim/checkpoints_test/config.yaml --model_dir coopertrim/checkpoints_test
```

For detection tasks:

```
python opencood/tools/train.py --hypes_yaml opencood/ckp_test/config.yaml --model_dir opencood/ckp_test [--half]
```
To train CooperTrim using multiple GPUs with distributed training:

```
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --use_env coopertrim/tools/train_perception.py --hypes_yaml coopertrim/checkpoints_orig/config.yaml --model_dir coopertrim/checkpoints_orig
```

To test CooperTrim using a single GPU, for the segmentation task:

```
python coopertrim/tools/inference_perception.py --model_dir coopertrim/checkpoints_test [--model_type static]
```

The evaluation results will be dumped in the model directory.
For detection tasks:

```
python opencood/tools/inference.py --model_dir opencood/checkpoints_test --fusion_method intermediate
```
We provide a series of tutorials to help you understand OpenCOOD in more depth; please check them out.
If you are using our CooperTrim framework for your research, please cite the following paper:
```
@inproceedings{mukhopadhyaycoopertrim,
  title={CooperTrim: Adaptive Data Selection for Uncertainty-Aware Cooperative Perception},
  author={Mukhopadhyay, Shilpa and Roy-Chowdhury, Amit and Qiu, Hang},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026}
}
```

