CoordConformer: Heterogenous EEG datasets decoding using Transformers
Sharat Patil, Robin Tibor Schirrmeister, Frank Hutter, Tonio Ball
This repository contains the implementation of the CoordConformer model along with training, fine-tuning, and testing scripts.
- We present CoordConformer, a model capable of training on EEG data with different electrode configurations.
- We introduce a novel method, CoordinateAttention, that uses 3-D coordinates of the electrodes to dynamically generate spatial convolution kernels for feature extraction.
- We use a Transformer encoder architecture based on EEG Conformer [1], with the spatial convolution layer replaced by dynamic convolutions generated by the CoordinateAttention module.
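To illustrate the idea behind CoordinateAttention, here is a minimal NumPy sketch of generating a spatial convolution kernel from 3-D electrode coordinates. The function name, MLP shapes, and weights are illustrative assumptions, not the repository's actual implementation; the point is that the same weights work for any number of electrodes, which is what makes training across montages possible.

```python
import numpy as np

def coordinate_kernel(coords, w1, b1, w2, b2):
    """Hypothetical sketch: map per-electrode (x, y, z) coordinates to
    spatial mixing weights via a small MLP. Because the MLP is applied
    per electrode, the kernel adapts to any montage size."""
    h = np.tanh(coords @ w1 + b1)   # (n_channels, hidden)
    return h @ w2 + b2              # (n_channels, n_filters)

rng = np.random.default_rng(0)
w1 = rng.normal(size=(3, 16)); b1 = np.zeros(16)
w2 = rng.normal(size=(16, 8)); b2 = np.zeros(8)

coords = rng.normal(size=(32, 3))           # 32 electrodes, (x, y, z)
kernel = coordinate_kernel(coords, w1, b1, w2, b2)   # (32, 8)

eeg = rng.normal(size=(32, 1000))           # channels x time samples
features = kernel.T @ eeg                   # (8, 1000): spatially filtered signals
```

The same `w1`/`w2` can be reused for a 19-channel recording, since the kernel shape follows the coordinate array rather than being fixed at training time.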
modules/cropped_modules.py
- Cropped decoding implemented for transformers.
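For readers unfamiliar with cropped decoding, the following is a generic sketch (not the code in `modules/cropped_modules.py`): a window slides over the trial, the model decodes each crop, and the per-crop predictions are averaged. The function and parameter names here are illustrative assumptions.

```python
import numpy as np

def cropped_predict(model, trial, crop_len, stride):
    """Hypothetical sketch of cropped decoding: decode overlapping
    crops of one trial and average their predictions."""
    n_times = trial.shape[-1]
    preds = [model(trial[..., s:s + crop_len])
             for s in range(0, n_times - crop_len + 1, stride)]
    return np.mean(preds, axis=0)

# Toy stand-in for a trained model: mean amplitude per channel as "scores".
model = lambda x: x.mean(axis=-1)
trial = np.ones((4, 1000))                         # 4 channels, 1000 samples
out = cropped_predict(model, trial, crop_len=250, stride=125)
```

Averaging over crops acts as both data augmentation at training time and test-time smoothing.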
pl_modules/datamodules.py
- Dataloaders for training on heterogeneous datasets (i.e., the input dimensions can vary from batch to batch during training), with settings to choose subjects, classes, and channels for each dataset.
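One common way to let input dimensions vary from batch to batch is to draw each batch from a single dataset, so channel counts are consistent within a batch but differ across batches. The sketch below illustrates that pattern with toy data; the names and structure are assumptions for illustration, not the datamodule's actual API.

```python
import numpy as np

# Toy datasets with different channel counts (trials are channels x time).
datasets = {
    "A": [np.zeros((64, 500)) for _ in range(6)],   # 64-channel recordings
    "B": [np.zeros((32, 500)) for _ in range(6)],   # 32-channel recordings
}

def batches(datasets, batch_size):
    """Hypothetical sketch: yield batches one dataset at a time, so each
    batch stacks trials with a consistent channel count."""
    for name, trials in datasets.items():
        for i in range(0, len(trials), batch_size):
            yield name, np.stack(trials[i:i + batch_size])  # (batch, n_ch, n_times)

shapes = [(name, b.shape) for name, b in batches(datasets, 3)]
```

A coordinate-conditioned model like CoordConformer can consume these variable-channel batches directly, since its spatial kernel is generated from each montage's electrode positions.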
utils/custom_sampler.py
- Sampler for class balancing within each dataset and for balancing the number of batches drawn per dataset.
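As a rough illustration of class balancing within one dataset (a generic sketch, not the sampler in `utils/custom_sampler.py`), minority-class indices can be oversampled until every class contributes equally:

```python
import random
from collections import defaultdict

def balanced_indices(labels, seed=0):
    """Hypothetical sketch: oversample minority classes so each class
    appears as often as the largest one, then shuffle."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    target = max(len(v) for v in by_class.values())
    out = []
    for idxs in by_class.values():
        out += idxs + rng.choices(idxs, k=target - len(idxs))
    rng.shuffle(out)
    return out

# Class 1 has 2 trials vs. 4 for class 0, so it is oversampled to 4.
idxs = balanced_indices([0, 0, 0, 0, 1, 1])
```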
- Python 3.10.12
- CUDA 11.7 (for GPU acceleration)
- Create and activate a Python virtual environment:
python -m venv venv
source venv/bin/activate # On Linux/Mac
# or
.\venv\Scripts\activate # On Windows
- Install dependencies:
pip install -r requirements.txt
Below are the commands to train, fine-tune, and test models for the BEETL 2021 Competition.
The data for the test datasets Cybathlon and Weibo has been preprocessed and uploaded here.
For the training data use:
python utils/prepare_datasets.py
To train the model, use the following command:
python train.py --config=beetl \
expt.name=BEETL \
expt.run=beetl \
expt.seed=76578
Other hyperparameters are listed in the config files.
To fine-tune a pre-trained model for a specific subject:
python finetune.py --path BEETL/beetl/76578 \
--version 0 \
--test_sub 1 \
trainer.finetune_n_epochs=100 \
train.finetune.lr=0.0005 \
data.batch_size=64 \
CoordConformer.ch_drop=0.0
Parameters:
- --path: Path to the pre-trained model as {expt.name}/{expt.run}/{expt.seed}
- --version: Model version number
- --test_sub: Subject for which to fine-tune. For the BEETL Competition, the test subjects are 1, 2, 3, 4, 5.
To test a trained/finetuned model:
python test.py --path BEETL/beetl/76578/finetune/0_finetune/ \
--version 0 \
--test_sub 0
Parameters:
- --path: Path to the model
- --version: Model version number
- --test_sub: Subject ID for testing. For the BEETL Competition, the test subjects are 1, 2, 3, 4, 5.
@inproceedings{
patil2024coordconformer,
title={CoordConformer: Heterogenous {EEG} datasets decoding using Transformers},
author={Sharat Patil and Robin Tibor Schirrmeister and Frank Hutter and Tonio Ball},
booktitle={ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling},
year={2024},
url={https://openreview.net/forum?id=RlYWwZXlsJ}
}
[1] Gu, X., Han, J., Yang, G.-Z., and Lo, B. Generalizable movement intention recognition with multiple heterogeneous EEG datasets. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 9858–9864, 2023. doi: 10.1109/ICRA48891.2023.10160462.
