
SEGA: Signed Ensemble Gaussian Black-box Attack against NR-IQA Models

This repository contains the code and resources for reproducing the main experiments in the TPAMI paper "SEGA: A Transferable Signed Ensemble Gaussian Black-box Attack Method for No-Reference Image Quality Assessment Models."

File Overview

  • SEGA.py: Main implementation of the SEGA attack.
  • test_transferability.py: Evaluates attack transferability across different target models. Note: results are computed directly from floating-point image data and may differ slightly from the paper's reported numbers, which are based on saved .bmp images.
  • threatmodels.py: Definitions of NR-IQA threat models used in the experiments.
  • threatmodels_class.py: Model classes for NR-IQA models.
  • livec-test.csv: A CSV file containing image IDs and quality labels for testing on the CLIVE dataset.
  • checkpoints/: Directory containing pretrained model checkpoints required for running the attack.
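The files above implement the full attack; as a loose, self-contained illustration of what a "signed ensemble Gaussian" black-box step could look like (a NES-style Gaussian-smoothed gradient estimate, averaged over an ensemble of source models, followed by a signed perturbation step), here is a toy NumPy sketch. The function name, step sizes, and the toy "quality models" below are all hypothetical; refer to SEGA.py for the actual method.

```python
import numpy as np

def sega_step(models, x, sigma=0.05, n_samples=64, eps=0.03, rng=None):
    """One hypothetical signed-ensemble-Gaussian attack step on image x."""
    rng = rng if rng is not None else np.random.default_rng(0)
    grad = np.zeros_like(x)
    for f in models:                      # ensemble of source models
        for _ in range(n_samples):        # Monte Carlo Gaussian smoothing
            u = rng.standard_normal(x.shape)
            grad += f(x + sigma * u) * u / sigma
    grad /= len(models) * n_samples
    # Signed step in the direction that lowers the predicted quality score.
    return np.clip(x - eps * np.sign(grad), 0.0, 1.0)

# Toy "quality models": score is highest for images closest to a reference.
ref = np.full((8, 8), 0.5)
models = [lambda im: -np.mean((im - ref) ** 2),
          lambda im: -np.mean(np.abs(im - ref))]

x = ref.copy()                 # a "perfect" image under both toy models
x_adv = sega_step(models, x)   # perturbed image with a lower score
```

The sign operation keeps the perturbation at a fixed per-pixel magnitude (an L-infinity-style budget), which is a common choice in transfer attacks.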

Dependencies

  • Python >= 3.7
  • PyTorch >= 1.8
  • NumPy
  • OpenCV
  • scikit-image

Before Running

  • Download the LIVEC dataset and place the center-cropped images in the following directory:
    'YOUR/PATH/TO/CLIVE/CENTERCROPPED/IMAGES'
    Then, update line 89 in test_transferability.py with this path.
  • Download the pretrained checkpoints from the provided Google Drive or Baidu Pan link and place them in the checkpoints/ directory.
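Before launching the attack, it can help to verify that both locations from the steps above exist. The snippet below is a hypothetical pre-flight check; the dataset path is the same placeholder as above and must be replaced with your local path.

```python
from pathlib import Path

# Placeholder path from the setup steps above; replace with your local path.
data_dir = Path("YOUR/PATH/TO/CLIVE/CENTERCROPPED/IMAGES")
ckpt_dir = Path("checkpoints")

# Map each required directory to whether it exists on disk.
results = {p.name: p.is_dir() for p in (data_dir, ckpt_dir)}
for name, ok in results.items():
    print(f"{name}: {'found' if ok else 'missing'}")
```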

How to Use

To run the transferability evaluation, use the following command:

python test_transferability.py --target_model DBCNN --source_model_set LIQE,HyperIQA,LinearityIQA --num_samples 10 --smooth_std 10./255
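Note that `--smooth_std 10./255` passes the Gaussian smoothing standard deviation as a fraction of the 8-bit pixel range. How test_transferability.py parses this is not shown here; the argparse sketch below is one hypothetical way to accept the flags from the command above, including an "a/b" fraction for the smoothing std.

```python
import argparse

def frac(s):
    # Accept either a plain float or an "a/b" fraction such as "10./255".
    if "/" in s:
        num, den = s.split("/")
        return float(num) / float(den)
    return float(s)

parser = argparse.ArgumentParser()
parser.add_argument("--target_model", default="DBCNN")
parser.add_argument("--source_model_set",
                    type=lambda s: s.split(","),  # comma-separated ensemble
                    default=["LIQE", "HyperIQA", "LinearityIQA"])
parser.add_argument("--num_samples", type=int, default=10)
parser.add_argument("--smooth_std", type=frac, default=10.0 / 255)

# Parse the same flags as the example command above.
args = parser.parse_args([
    "--target_model", "DBCNN",
    "--source_model_set", "LIQE,HyperIQA,LinearityIQA",
    "--num_samples", "10",
    "--smooth_std", "10./255",
])
```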
