Adversarial attacks affect not only image classification but also object detection. In this project, we introduce both global perturbation and patch-based adversarial attacks to assess the robustness of object detection models. Our framework integrates seamlessly with the widely used mmdet library, providing an accessible platform for researchers and developers.
- **Integrated with mmdet**
  - Compatible with many models from mmdet. Assess their adversarial robustness using the provided config and weight files.
- **Global perturbation attack**
  - Employs techniques such as FGSM, BIM, and PGD to test adversarial robustness.
- **Patch-based attack**
  - Adversarial patches optimized by gradient descent.
  - A shared patch for objects of the same class.
  - Each object receives a patch at its center.
- **Visualization**
  - Adversarial images can be saved easily for comparison, analysis, and data augmentation.
- **Distributed training and testing**
  - PyTorch distributed data-parallel training and testing are supported for faster experiments.
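As a concrete illustration of the global perturbation attacks listed above, here is a minimal, framework-agnostic PGD sketch (stdlib-only; a real attack would back-propagate through the detector's loss rather than this toy quadratic loss, and FGSM is the single-step special case):

```python
# Minimal PGD sketch under the L-infinity norm (illustrative toy, not the
# repo's implementation). x0 is the clean input as a list of floats.

def pgd(x0, grad_fn, eps, alpha, steps):
    """Iterate: ascend along the gradient sign, then project back into
    the eps-ball around the clean input x0."""
    x = list(x0)
    for _ in range(steps):
        g = grad_fn(x)
        # gradient-sign ascent step (maximize the loss)
        x = [xi + alpha * (1 if gi > 0 else -1 if gi < 0 else 0)
             for xi, gi in zip(x, g)]
        # L-infinity projection: clip each coordinate to [x0 - eps, x0 + eps]
        x = [min(max(xi, x0i - eps), x0i + eps) for xi, x0i in zip(x, x0)]
    return x

# Toy loss L(x) = (x[0] - 1)^2 + (x[1] + 1)^2, gradient [2(x0-1), 2(x1+1)];
# PGD pushes x away from the minimizer while staying within the eps-ball.
x_adv = pgd([0.0, 0.0],
            lambda x: [2 * (x[0] - 1.0), 2 * (x[1] + 1.0)],
            eps=0.1, alpha=0.05, steps=10)
```

BIM corresponds to running these iterations without a random start; the attacks in this repo apply the same loop to image tensors and the detection loss.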
- **Dataset**
  - We evaluate the robustness of detection models on the COCO 2017 validation set. Please download the COCO 2017 dataset first; by default, the `val2017` image folder and the annotations are required. If you want to use your own dataset, please convert it to COCO style with the corresponding `metainfo`.
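If you need to convert a custom dataset, a COCO-style annotation file has the following minimal shape (field names follow the standard COCO format; the file name and category names here are placeholders, and the class list must match the `metainfo` you pass to the dataloaders):

```python
import json

# Minimal COCO-style annotation skeleton (standard COCO fields only).
coco = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels; iscrowd is 0 for plain boxes
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 120, 50, 80], "area": 50 * 80, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "cls_1"},
    ],
}

# Placeholder output path for illustration
with open("instances_val.json", "w") as f:
    json.dump(coco, f)
```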
- **Detection Model**
  - Train object detectors with mmdet, or directly download the detector weight files and config files provided by mmdet. We recommend using the complete config file generated by mmdet itself.
- **Modify detector config files**
  - Modify all the `data_root` attributes in the detector config files to your correct path, for example, `data_root='/path_to_your_datasets/coco2017/'`. There are multiple `data_root` attributes in an mmdet-style config file; please make sure that all of them are modified correctly.
  - Modify the `ann_file` attribute inside the `test_evaluator` attribute to your correct path, for example, `ann_file='/path_to_your_dataset/coco2017/annotations/instances_val2017.json'`.
  - If you use your own dataset, the config files generated during training can usually be used directly. Specifically, please make sure that you have provided the `metainfo` attribute for the `dataset` attribute in `train_dataloader` and `test_dataloader` as follows:

    ```python
    metainfo = {'classes': ['cls_1', 'cls_2', '...', 'cls_n']}
    train_dataloader = dict(
        batch_size=4,
        num_workers=4,
        dataset=dict(
            data_root='/path_to_your_datasets/coco2017/',
            metainfo=metainfo,
            ann_file='annotations/instances_val2017.json',
            data_prefix=dict(img='images')))
    test_dataloader = dict(
        batch_size=4,
        num_workers=4,
        dataset=dict(
            data_root='/path_to_your_datasets/coco2017/',
            metainfo=metainfo,
            ann_file='annotations/instances_val2017.json',
            data_prefix=dict(img='images')))
    ```
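Because an mmdet-style config expands into several nested dicts that each carry a `data_root`, it is easy to miss one. A small helper (a sketch, not part of this repo) that rewrites every `data_root` key in a nested config structure and reports how many it touched:

```python
def set_data_root(cfg, new_root):
    """Recursively overwrite every 'data_root' key in a nested dict/list
    config structure. Returns the number of keys changed, so you can
    sanity-check that none were missed."""
    changed = 0
    if isinstance(cfg, dict):
        for key, value in cfg.items():
            if key == "data_root":
                cfg[key] = new_root
                changed += 1
            else:
                changed += set_data_root(value, new_root)
    elif isinstance(cfg, list):
        for item in cfg:
            changed += set_data_root(item, new_root)
    return changed

# Toy config with two data_root attributes, as in a real mmdet config
cfg = {"train_dataloader": {"dataset": {"data_root": "old/"}},
       "test_dataloader": {"dataset": {"data_root": "old/"}}}
n = set_data_root(cfg, "/path_to_your_datasets/coco2017/")  # n == 2
```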
- **Global perturbation attack**
  - Modify the `detector` attribute in the `configs/global_demo.py` file according to your detector config file and weight file paths.
  - Run the following command to start:

    ```shell
    CUDA_VISIBLE_DEVICES=0 python run.py --cfg configs/global_demo.py
    ```

    Besides, you can also overwrite the `detector` attribute from the console and start with the following command:

    ```shell
    CUDA_VISIBLE_DEVICES=0 python run.py --cfg configs/demo.py --cfg-options detector.cfg_file=/path_to_your_detector_cfg_file detector.weight_file=/path_to_your_detector_weight_file
    ```

  - For more attack configurations, please refer to `configs/global/base.py`. You can overwrite them in the `global_demo.py` file as you want. So far, FGSM, BIM, MIM, TIM, DI_FGSM, SI_NI_FGSM, VMI_FGSM, and PGD attack methods are supported for the global perturbation attack.
- **Patch-based attack**
  - Modify the `detector` attribute in the `configs/patch_demo.py` file according to your detector config file and weight file paths.
  - Run the following command to start:

    ```shell
    CUDA_VISIBLE_DEVICES=0 python run.py --cfg configs/patch_demo.py
    ```

  - For more attack configurations, please refer to `configs/patch/base.py`. You can overwrite them in the `patch_demo.py` file as you want.
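The "central patch" placement described in the features amounts to pasting the patch so its center coincides with the bbox center, clipped to the image bounds. A stdlib-only sketch of the coordinate math (a hypothetical helper, not this repo's API):

```python
def central_patch_region(bbox, patch_size, img_w, img_h):
    """Given bbox = (x1, y1, x2, y2) in pixels, return the
    (left, top, right, bottom) region where a patch_size x patch_size
    patch should be pasted so it is centered on the object,
    clipped to an img_w x img_h image."""
    cx = (bbox[0] + bbox[2]) / 2  # bbox center x
    cy = (bbox[1] + bbox[3]) / 2  # bbox center y
    half = patch_size / 2
    left = max(0, int(cx - half))
    top = max(0, int(cy - half))
    right = min(img_w, left + patch_size)
    bottom = min(img_h, top + patch_size)
    return left, top, right, bottom

# A 20x20 patch centered on a box whose center is (30, 40)
region = central_patch_region((10, 20, 50, 60), 20, 640, 480)  # (20, 30, 40, 50)
```

The shared-patch variant simply pastes the same optimized patch tensor into this region for every object of a given class.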
- **Distributed training and testing**
  - PyTorch distributed data parallel (DDP) is supported. To start DDP training or testing, please refer to `run_dist.sh` for details.
- **Evaluating non-mmdet detectors**
  - If you want to evaluate non-mmdet detectors, you may try the following steps:
    - Convert your dataset to COCO style.
    - Generate an mmdet-style config file containing a `test_dataloader`, a `train_dataloader` (if needed), and a `test_evaluator`.
    - Modify your detection model code. Specifically, you are required to add a `data_preprocessor`, a loss function, and a predict function. See `ares.attack.detection.custom.detector.CustomDetector` for details.
    - Replace `detector = MODELS.build(detector_cfg.model)` in the `run.py` file with your detector initialization code.
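The required interface can be sketched as a stub (illustrative only; the class and method signatures below are assumptions for exposition — see `ares.attack.detection.custom.detector.CustomDetector` for the authoritative definitions):

```python
class MyDetector:
    """Stub showing the three pieces a non-mmdet detector needs:
    a data preprocessor, a loss function, and a predict function.
    Names and signatures here are illustrative, not the repo's API."""

    def data_preprocessor(self, batch):
        # normalize/pad images and move tensors to the right device
        return batch

    def loss(self, batch):
        # forward pass returning a dict of differentiable losses; the
        # attack back-propagates these to craft perturbations
        return {"loss_cls": 0.0, "loss_bbox": 0.0}

    def predict(self, batch):
        # forward pass returning per-image detections for evaluation
        return [{"bboxes": [], "labels": [], "scores": []} for _ in batch]
```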
- **Evaluation of some object detection models**

  Attack settings: global perturbation attack using PGD with eps=2 under the $L_\infty$ norm.

  | Detector | Config | Weight | IoU | Area | MaxDets | AP (clean) | AP (attacked) |
  | --- | --- | --- | --- | --- | --- | --- | --- |
  | Faster R-CNN | config | weight | 0.50:0.95 | all | 100 | 0.422 | 0.041 |
  | YOLO v3 | config | weight | 0.50:0.95 | all | 100 | 0.337 | 0.062 |
  | SSD | config | weight | 0.50:0.95 | all | 100 | 0.295 | 0.039 |
  | RetinaNet | config | weight | 0.50:0.95 | all | 100 | 0.365 | 0.027 |
  | CenterNet | config | weight | 0.50:0.95 | all | 100 | 0.401 | 0.070 |
  | FCOS | config | weight | 0.50:0.95 | all | 100 | 0.422 | 0.045 |
  | DETR | config | weight | 0.50:0.95 | all | 100 | 0.397 | 0.074 |
  | Deformable DETR | config | weight | 0.50:0.95 | all | 100 | 0.469 | 0.067 |
  | DINO | config | weight | 0.50:0.95 | all | 100 | 0.570 | 0.086 |
  | YOLOX | config | weight | 0.50:0.95 | all | 100 | 0.491 | 0.098 |
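When comparing robustness across detectors, the relative AP drop, (AP_clean − AP_attacked) / AP_clean, is often more informative than raw attacked AP; a quick check using two rows from the table above:

```python
def relative_ap_drop(ap_clean, ap_attacked):
    """Fraction of clean AP lost under attack (0 = robust, 1 = destroyed)."""
    return (ap_clean - ap_attacked) / ap_clean

# Numbers taken from the PGD (eps=2, L-infinity) table above
faster_rcnn = relative_ap_drop(0.422, 0.041)  # ~0.903
yolox = relative_ap_drop(0.491, 0.098)        # ~0.800
```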
- **Visualizations**
  - Detector: FCOS
  - Adversarial image: PGD attack with eps=5/255 under the $L_\infty$ setting
  - [Figure: GT bboxes (clean image) · predicted bboxes (clean image) · predicted bboxes (adversarial image)]
Many thanks to these excellent open-source projects: