
[BUG] model_merger sets lora_alpha=0 when merging SFT LoRA checkpoints #5325

@Yatogaii

Description


System Info

Not relevant; this issue does not depend on the system configuration.

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

  1. Train an SFT model with LoRA enabled, for example:

     model.lora_rank=8
     model.lora_alpha=16

  2. Merge the checkpoint using:

     python3 -m verl.model_merger merge

  3. Inspect the generated LoRA config in the merged output directory.
     The merged adapter_config.json contains:

     "lora_alpha": 0,

     (see the inspection sketch after this list)

Expected behavior

lora_alpha in the merged checkpoint should match the value specified during training (e.g., 16 in this example).

As mentioned in #3050, this issue still exists in the current version.
This behavior effectively makes SFT + LoRA workflows unusable after merging, since the scaling factor is lost.
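
Until this is fixed, one possible interim workaround, sketched below under the assumption that only the serialized config value is wrong and the saved adapter weights are intact, is to write the training-time lora_alpha back into the merged adapter_config.json before using the adapter. Since LoRA's effective scaling is lora_alpha / r, leaving the value at 0 silently zeroes the adapter's contribution.

# Hypothetical workaround: restore the training-time lora_alpha (16 in this example).
# Assumes only adapter_config.json is affected; the adapter weights are left untouched.
import json
from pathlib import Path

config_path = Path("./merged_model") / "adapter_config.json"  # placeholder path
config = json.loads(config_path.read_text())
config["lora_alpha"] = 16
config_path.write_text(json.dumps(config, indent=2))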
