[ckpt, model] fix: preserve lora_alpha in model_merger via training meta #5326
Open
Yatogaii wants to merge 3 commits into verl-project:main from
Conversation
Contributor
Code Review
This pull request introduces a solid fix for preserving lora_alpha during model merging by persisting LoRA training metadata. The approach of saving lora_train_meta.json during training and then using it as the source of truth in the merger is clean and effective. The changes in base_model_merger.py to read this metadata and handle potential mismatches with warnings are well-implemented. I've identified one critical issue in the checkpoint saving logic that could lead to a crash if certain configuration values are None. Please see my detailed comment.
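As an illustration of that concern (not the actual code in this PR), a guard of roughly this shape avoids the crash when the configured `lora_rank` / `lora_alpha` are unset:

```python
def build_lora_train_meta(lora_rank, lora_alpha):
    """Illustrative guard: never write None into the training metadata."""
    if lora_rank is None or lora_rank <= 0:
        return None  # LoRA not enabled; nothing to persist
    return {
        "lora_rank": lora_rank,
        # Fall back to the rank rather than crashing when alpha is unset.
        "lora_alpha": lora_alpha if lora_alpha is not None else lora_rank,
    }
```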
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
What does this PR do?
This PR fixes the issue where `model_merger` generates a merged LoRA adapter with `lora_alpha=0`, even though a non-zero value was specified during training. The fix persists the LoRA training-time config and updates the base merger logic to read it as the source of truth during merge.
Key points:
- Save the LoRA training-time config to `lora_train_meta.json`.
- Read `lora_train_meta.json` when merging, ensuring `lora_alpha` is correct.
Compared to #3100:
- That PR infers `lora_alpha` heuristically (e.g. `lora_alpha = rank * 2`) from the `.pt` weights instead of reading the original training configuration.
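For reference, the persisted metadata could look roughly like the following; the exact field names are an assumption, not copied from the PR:

```json
{
  "lora_rank": 8,
  "lora_alpha": 16
}
```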
Checklist Before Starting
Search for similar PRs. Paste at least one query link here:
Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI).
Test
This change is not covered by CI.
I validated the fix with a local end-to-end workflow:
1. Train SFT + LoRA with `model.lora_rank=8 model.lora_alpha=16`.
2. Merge with `model_merger`.
3. Verify the merged adapter config:
   - Before the fix: `lora_alpha = 0`
   - After the fix: `lora_alpha = 16`
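For a quick sanity check, a snippet like this (with a placeholder output path) reads the merged adapter's `adapter_config.json`, where PEFT stores `r` and `lora_alpha`:

```python
import json

# Placeholder path to the merged adapter produced by model_merger.
with open("merged_adapter/adapter_config.json") as f:
    adapter_cfg = json.load(f)

print(adapter_cfg["r"], adapter_cfg["lora_alpha"])  # expected: 8 16 after the fix
```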
API and Usage Example
No API changes are introduced.
This PR only adds an additional training artifact: `lora_train_meta.json`. The merge process remains unchanged for users.
Design & Code Changes
Design
- During training, save the LoRA config to `lora_train_meta.json`.
- During merging, read `lora_train_meta.json` and use it as the source of truth for LoRA parameters.
This PR intentionally does not rely on `adapter_config.json`, since that file may be ambiguous (it may represent merged/runtime state rather than training-time configuration).
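A minimal sketch of the training-side half of this design, assuming the trainer knows the LoRA settings at checkpoint time (function and field names are illustrative, not the exact code in this PR):

```python
import json
import os


def save_lora_train_meta(checkpoint_dir: str, lora_rank: int, lora_alpha: float) -> None:
    """Persist the training-time LoRA config next to the checkpoint (illustrative)."""
    meta = {"lora_rank": lora_rank, "lora_alpha": lora_alpha}
    with open(os.path.join(checkpoint_dir, "lora_train_meta.json"), "w") as f:
        json.dump(meta, f, indent=2)
```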
Code Changes
- Save LoRA training metadata during SFT training.
- Update the `model_merger` base merge logic: read LoRA parameters from `lora_train_meta.json` instead of inferring them from the `.pt` weights.
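On the merger side, the behavior described above could look roughly like this sketch (illustrative names, not the exact code in `base_model_merger.py`): prefer the persisted value and warn if it disagrees with what the weights suggest.

```python
import json
import os
import warnings


def resolve_lora_alpha(checkpoint_dir: str, inferred_alpha=None):
    """Prefer lora_alpha from lora_train_meta.json; fall back to the inferred value."""
    meta_path = os.path.join(checkpoint_dir, "lora_train_meta.json")
    if not os.path.exists(meta_path):
        return inferred_alpha  # old checkpoints without metadata
    with open(meta_path) as f:
        trained_alpha = json.load(f).get("lora_alpha")
    if inferred_alpha is not None and trained_alpha != inferred_alpha:
        warnings.warn(
            f"lora_alpha mismatch: training meta has {trained_alpha}, "
            f"weights suggest {inferred_alpha}; using the training meta."
        )
    return trained_alpha
```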
Checklist Before Submitting
Important
Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why:
[ ] Once your PR is ready for CI, send a message in the `ci-request` channel in the `verl` Slack workspace. (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
If your PR is related to the `recipe` submodule, please also update the reference to the submodule commit via `git submodule update --remote` or `cd recipe && git pull origin main`.