
CFRL Feature condition autoencoder  #573

@HeyItsBethany3

Description


Hi Team,

Firstly, I want to say thank you so much for implementing a truly model-agnostic method for counterfactuals! I've been searching for many months for a counterfactual tool I can easily use with GBMs, and it made my day when I found this method.

I have implemented the CFRL method on a simple model with 12 features, but the counterfactuals are not very sparse: typically 9 or 10 of the 12 features are changed. I tried marking more features as immutable to enforce sparsity, which is not ideal. Is there a way to add feature conditioning into the autoencoder? My concern is that if I train the autoencoder on all 12 variables and then fix 6 of them at a later stage, the autoencoder may have learned important patterns in the very variables I then constrain away.
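For concreteness, this is how I'm counting sparsity (a minimal sketch; the arrays and tolerance here are made up for illustration):

```python
import numpy as np

def sparsity(x, x_cf, tol=1e-6):
    """Count how many features differ between an instance and its counterfactual."""
    return int(np.sum(np.abs(np.asarray(x) - np.asarray(x_cf)) > tol))

# Hypothetical example: 2 of the 4 features were changed.
x = np.array([1.0, 2.0, 3.0, 4.0])
x_cf = np.array([1.0, 2.5, 3.0, 4.1])
print(sparsity(x, x_cf))  # 2
```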

Do you have any other advice on how to improve sparsity?
I have changed the loss function coefficients, but the results didn't vary much. I also don't have much experience with tuning autoencoders. Do you have any advice on the best parameters to start with, or to focus on, that would make the most difference?
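To show the kind of trade-off I understand the sparsity coefficient to control, here is a toy sketch with a linear model and an L1 penalty as a proxy for sparsity. This is not alibi's actual CFRL loss or training loop; all names and values are hypothetical:

```python
import numpy as np

def find_delta(w, x, target, coeff_sparsity, lr=0.05, steps=500):
    """Find a perturbation `delta` so that the linear score w.(x + delta)
    approaches `target`, with an L1 penalty on `delta` (proximal gradient)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        # gradient of the squared prediction loss
        grad = 2.0 * (w @ (x + delta) - target) * w
        delta = delta - lr * grad
        # proximal step for the L1 penalty: soft-threshold toward zero
        delta = np.sign(delta) * np.maximum(np.abs(delta) - lr * coeff_sparsity, 0.0)
    return delta

rng = np.random.default_rng(0)
w, x = rng.normal(size=6), rng.normal(size=6)
low = find_delta(w, x, target=1.0, coeff_sparsity=0.01)
high = find_delta(w, x, target=1.0, coeff_sparsity=1.0)
changed = lambda d: int(np.sum(np.abs(d) > 1e-6))
# a larger sparsity coefficient should change no more features than a smaller one
print(changed(low), changed(high))
```

In this toy setting the soft-thresholding zeroes out small perturbations exactly, which is why raising the coefficient tends to concentrate the change on fewer features.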

Thank you so much for your help!
Bethany
