
option to keep multiple models in memory #12227

Merged
AUTOMATIC1111 merged 5 commits into dev from multiple_loaded_models on Aug 5, 2023

Conversation

AUTOMATIC1111 (Owner) commented Jul 31, 2023

Description

  • option to specify how many loaded models you want to keep in memory (a minimal sketch of the caching scheme follows this list)
  • additional checkbox that lets you decide whether you want to keep just the one model in VRAM or all of them
  • if switching to an already loaded model, the switch is either instantaneous (if the checkbox is not checked) or very fast, roughly 1 s for me (if the checkbox is checked)
  • obsoletes the "Checkpoints to cache in RAM" setting
  • tested to work with --medvram
  • Loras tested to work properly
  • additional things I put into the PR despite telling others not to put extra stuff in:
    • suppressed sgm/ldm print statements can be restored in settings
    • do_inpainting_hijack removed; the function is hijacked at startup instead
    • fixed a bug where get_empty_cond would create an empty prompt cond using Lora weights
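
A minimal sketch of how such a scheme can work, under assumed names (CheckpointCache, activate, and load_fn are illustrations, not the PR's actual code): an LRU cache holds up to N loaded checkpoints, and a device-placement policy decides whether inactive models are parked in RAM or kept in VRAM.

```python
import collections

class CheckpointCache:
    # max_loaded mirrors the "how many loaded models to keep in memory" option;
    # keep_one_on_device mirrors the checkbox: when True, only the active model
    # stays in VRAM and the other cached models are parked in system RAM.
    def __init__(self, max_loaded=2, keep_one_on_device=False):
        self.max_loaded = max_loaded
        self.keep_one_on_device = keep_one_on_device
        self.loaded = collections.OrderedDict()  # checkpoint name -> nn.Module

    def activate(self, name, load_fn):
        """Return the model for `name`, loading it from disk if necessary."""
        if name in self.loaded:
            self.loaded.move_to_end(name)  # mark as most recently used
        else:
            if len(self.loaded) >= self.max_loaded:
                # Evict the least recently used checkpoint entirely.
                self.loaded.popitem(last=False)
            self.loaded[name] = load_fn(name)

        if self.keep_one_on_device:
            # Only the active model lives in VRAM; switching back to a cached
            # model later costs one RAM -> VRAM transfer (the ~1 s case above).
            for other, model in self.loaded.items():
                model.to("cuda" if other == name else "cpu")
        else:
            # Everything stays in VRAM, so switching is instantaneous.
            self.loaded[name].to("cuda")
        return self.loaded[name]
```

An OrderedDict keeps the LRU bookkeeping O(1); the actual PR implements this against the webui's own checkpoint-loading machinery rather than a standalone class like this.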

Screenshots/videos:

[screenshot attachment: firefox_N63eoA4M7X]

AUTOMATIC1111 merged commit c613416 into dev on Aug 5, 2023
AUTOMATIC1111 deleted the multiple_loaded_models branch on August 5, 2023 04:52

ghost commented Oct 6, 2023

This introduced an issue where models get corrupted after reusing the older model to load the new one.
See #13516 for a temporary fix until it's sorted out.
