CoordConformer

CoordConformer: Heterogenous EEG datasets decoding using Transformers

Sharat Patil, Robin Tibor Schirrmeister, Frank Hutter, Tonio Ball

This repository contains the implementation of the CoordConformer model along with training, fine-tuning, and testing scripts.

Model

  • We present CoordConformer, a model capable of training on EEG data with different electrode configurations.
  • We introduce a novel method, CoordinateAttention, that uses 3-D coordinates of the electrodes to dynamically generate spatial convolution kernels for feature extraction.
  • We use a Transformer encoder architecture based on EEG Conformer [1], with the spatial convolution layer replaced by dynamic convolutions generated by the CoordinateAttention module.
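The key idea behind CoordinateAttention, as described above, is that spatial convolution weights are generated from the electrodes' 3-D coordinates rather than learned per fixed channel layout. A minimal NumPy sketch of that idea follows; the function names, the MLP shape, and the kernel count are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

def coordinate_attention_weights(coords, rng, hidden=16, n_kernels=8):
    # Hypothetical sketch: pass each electrode's (x, y, z) coordinate
    # through a small randomly initialised MLP to obtain per-kernel
    # spatial weights. coords has shape (n_channels, 3).
    w1 = rng.standard_normal((3, hidden)) * 0.1
    w2 = rng.standard_normal((hidden, n_kernels)) * 0.1
    h = np.tanh(coords @ w1)          # (n_channels, hidden)
    return h @ w2                     # (n_channels, n_kernels)

def dynamic_spatial_conv(eeg, coords, rng):
    # Collapse the channel axis with coordinate-derived kernels:
    # (n_channels, n_times) -> (n_kernels, n_times).
    weights = coordinate_attention_weights(coords, rng)
    return weights.T @ eeg

rng = np.random.default_rng(0)
coords = rng.standard_normal((22, 3))   # 22 electrodes, arbitrary positions
eeg = rng.standard_normal((22, 1000))   # one trial, 1000 time samples
feats = dynamic_spatial_conv(eeg, coords, rng)
print(feats.shape)                      # (8, 1000)
```

Because the weights are a function of the coordinates, the same module handles any number of input channels, which is what allows training across electrode configurations.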

Model Architecture

Additional useful modules:

modules/cropped_modules.py

pl_modules/datamodules.py

  • Dataloaders for training on heterogeneous datasets (i.e., the input dimensions can vary from batch to batch during training), with settings to choose the subjects, classes, and channels for each dataset.
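One common way to let input dimensions vary from batch to batch, as this datamodule does, is to draw each batch from a single dataset so that samples within a batch share a shape. A toy sketch under that assumption (the actual datamodule may organise this differently):

```python
import numpy as np

# Hypothetical toy setup: two datasets with different channel counts.
datasets = {
    "A": [np.zeros((22, 1000)) for _ in range(6)],  # 22-channel recordings
    "B": [np.zeros((64, 1000)) for _ in range(6)],  # 64-channel recordings
}

def hetero_batches(datasets, batch_size):
    # Yield batches drawn from one dataset at a time, so every batch
    # is stackable even though shapes differ across datasets.
    for name, samples in datasets.items():
        for i in range(0, len(samples), batch_size):
            yield name, np.stack(samples[i:i + batch_size])

shapes = [(name, batch.shape) for name, batch in hetero_batches(datasets, 3)]
print(shapes)
```

In real training one would also shuffle and interleave the per-dataset batches rather than exhausting one dataset before the next.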

utils/custom_sampler.py

  • Sampler for class balancing within each dataset and for balancing the number of batches drawn per dataset.
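Class balancing of this kind is typically done by sampling indices with weights inversely proportional to class frequency. A small self-contained sketch of that principle (not the repository's sampler; the function name and seed are illustrative):

```python
import random
from collections import Counter

def balanced_indices(labels, n_draws, seed=0):
    # Hypothetical sketch: draw indices with probability inversely
    # proportional to class frequency, so rare classes appear about as
    # often as common ones in expectation.
    counts = Counter(labels)
    weights = [1.0 / counts[y] for y in labels]
    rng = random.Random(seed)
    return rng.choices(range(len(labels)), weights=weights, k=n_draws)

labels = ["left"] * 90 + ["right"] * 10   # imbalanced toy labels
idx = balanced_indices(labels, 1000)
drawn = Counter(labels[i] for i in idx)
print(drawn)   # roughly 500 of each class in expectation
```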

Setup tested with

  • Python 3.10.12
  • CUDA 11.7 (for GPU acceleration)

Setup

  1. Create and activate a Python virtual environment:
python -m venv venv
source venv/bin/activate  # On Linux/Mac
# or
.\venv\Scripts\activate  # On Windows
  2. Install dependencies:
pip install -r requirements.txt

Usage

Below are the commands to train, fine-tune, and test models for the BEETL 2021 Competition.

1. Setup Data

The data for the test datasets Cybathlon and Weibo has been preprocessed and uploaded here.

For the training data, use:

python utils/prepare_datasets.py

2. Training

To train the model, use the following command:

python train.py --config=beetl \
                expt.name=BEETL \
                expt.run=beetl \
                expt.seed=76578

Other hyperparameters are listed in the config files.

3. Fine-tuning

To fine-tune a pre-trained model for a specific subject:

python finetune.py --path BEETL/beetl/76578 \
                   --version 0 \
                   --test_sub 1 \
                   trainer.finetune_n_epochs=100 \
                   train.finetune.lr=0.0005 \
                   data.batch_size=64 \
                   CoordConformer.ch_drop=0.0

Parameters:

  • --path: Path to the pre-trained model, given as {expt.name}/{expt.run}/{expt.seed}
  • --version: Model version number
  • --test_sub: Subject on which to fine-tune. For the BEETL Competition, the test subjects are 1, 2, 3, 4, and 5

4. Testing

To test a trained or fine-tuned model:

python test.py --path BEETL/beetl/76578/finetune/0_finetune/ \
               --version 0 \
               --test_sub 0

Parameters:

  • --path: Path to the model
  • --version: Model version number
  • --test_sub: Subject ID for testing. For the BEETL Competition, the test subjects are 1, 2, 3, 4, and 5

Citation

@inproceedings{patil2024coordconformer,
  title={CoordConformer: Heterogenous {EEG} datasets decoding using Transformers},
  author={Sharat Patil and Robin Tibor Schirrmeister and Frank Hutter and Tonio Ball},
  booktitle={ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling},
  year={2024},
  url={https://openreview.net/forum?id=RlYWwZXlsJ}
}

[1] Gu, X., Han, J., Yang, G.-Z., and Lo, B. Generalizable movement intention recognition with multiple heterogeneous EEG datasets. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 9858–9864, 2023. doi: 10.1109/ICRA48891.2023.10160462.
