
ENH: Add Epochs.score_quality() for data-driven epoch quality scoring#13710

Open
aman-coder03 wants to merge 3 commits into mne-tools:main from aman-coder03:enh-epoch-score-quality

Conversation

@aman-coder03
Contributor

Reference issue (if any)

Closes #13676

What does this implement/fix?

Adds a score_quality() method to Epochs that scores each epoch on a 0 to 1 scale based on how much of an outlier it is relative to the rest of the recording. It computes peak-to-peak amplitude, variance, and kurtosis per epoch, z-scored robustly using the median absolute deviation; no new dependencies are required.
The idea is to give users a quick, data-driven starting point before calling drop_bad(), instead of guessing thresholds from scratch. It's not trying to replace autoreject, just to fill the gap for users who want something lightweight and built-in.
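To make the scoring idea above concrete, here is a rough sketch of how per-epoch features could be robustly z-scored with the median absolute deviation and squashed to a 0–1 score. This is an illustrative reconstruction, not the PR's actual code; the function name `score_epochs`, the choice to take the worst feature per epoch, and the scoring direction (higher = more outlying) are all assumptions:

```python
import numpy as np
from scipy.stats import kurtosis  # scipy is already an MNE dependency


def score_epochs(data):
    """Hypothetical sketch: score epochs 0-1 by how outlying they are.

    data : ndarray of shape (n_epochs, n_channels, n_times)
    """
    # Per-epoch features, averaged over channels
    ptp = np.ptp(data, axis=2).mean(axis=1)       # peak-to-peak amplitude
    var = data.var(axis=2).mean(axis=1)           # variance
    kurt = kurtosis(data, axis=2).mean(axis=1)    # kurtosis

    scores = np.zeros(data.shape[0])
    for feat in (ptp, var, kurt):
        med = np.median(feat)
        mad = np.median(np.abs(feat - med)) * 1.4826  # normal consistency factor
        z = np.abs(feat - med) / (mad + 1e-12)        # robust |z|-score
        scores = np.maximum(scores, z)                # worst feature dominates

    # Map [0, inf) -> [0, 1): 0 = typical epoch, near 1 = extreme outlier
    return 1.0 - 1.0 / (1.0 + scores)
```

Because the score is a unitless ranking rather than a physical threshold, it only orders epochs by "suspiciousness"; the actual PR may weight or combine the features differently.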

Additional information

Happy to adjust the API or scoring logic based on feedback. The main open question from the issue, whether suggest_reject=True is worth adding, I've left out for now to keep the initial PR focused.

@CarinaFo
Contributor

CarinaFo commented Mar 2, 2026

Hi,
I fully agree that this is a neat feature, but I am not sure about the use case.

I intuitively thought about the reject parameter in the Epochs class. Here, epochs are rejected based on maximum peak-to-peak signal amplitude (PTP).

From my experience, most users play around with this threshold to get a feeling for the number of epochs being rejected. The function you implemented can inform the user about a threshold for rejecting bad epochs based on PTP etc., but I do think it won't be useful for informing a threshold for the epochs reject parameter or for autoreject.

It seems to me that it adds a layer of abstraction on rejection of noisy epochs, but maybe I misunderstood the use case you had in mind?

@aman-coder03
Contributor Author

Thanks for the feedback @CarinaFo
You are right that the use case isn't clear enough. The score isn't meant to directly inform the reject= threshold (those are in physical units like µV, while the score is just a unitless 0–1 ranking). It's more of an exploratory tool: a quick way to see which epochs stand out before deciding what to do with them, without having to scroll through everything manually or set up autoreject.

Think of it as answering "which epochs should I look at first?" rather than "what threshold should I use?". Especially handy for large datasets where manual inspection isn't realistic.
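The "which epochs should I look at first?" workflow could look something like this. The variable names and the `score_quality()` call are illustrative assumptions, not the PR's confirmed API:

```python
import numpy as np

# Suppose these came from a hypothetical epochs.score_quality() call
scores = np.array([0.12, 0.91, 0.05, 0.73, 0.40])

# Rank epoch indices from most to least outlying
worst_first = np.argsort(scores)[::-1]

# In MNE one could then inspect only the top candidates, e.g.:
# epochs[worst_first[:10]].plot()
```

This keeps manual review focused on a handful of suspicious epochs instead of the whole recording.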

Happy to make this clearer in the docstring if that helps. And if the general feeling is that this doesn't add enough on top of what's already there, I'm open to that too.

