Is it possible to "unload" the PEFT LoRA weights after mutating the base model with `PeftModel.from_pretrained`?
I'd like to load multiple LoRA models on top of a single base model, and reloading the whole base model each time is time-consuming. Is there a way to unload the PEFT model while keeping the base model in memory?