WIP: SALM with NeMo Automodel integration for Nemotron Nano V3 LLM backbone#15447
Conversation
Signed-off-by: Piotr Żelasko <pzelasko@nvidia.com>
…eechlm-yifan-mod-port
…IDIA-NeMo/NeMo into speechlm2-with-nemo-automodel
…automodel's utility
…full LLM
…converted models
Implements NemotronNanoV3PromptFormatter (NAME="nemotron-nano-v3") using a ChatML-style <|im_start|>/<|im_end|> template with an encode_dialog override that handles:
- auto-insertion of an empty system turn,
- truncation of thinking content in history turns,
- <think></think> prepended to non-thinking assistant turns,
- a dynamic inference prefix (thinking on/off).
Includes Lhotse Cut integration via registered_prompt_format_fn. Verified against HF apply_chat_template for nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 (both string and token match).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Piotr Żelasko <petezor@gmail.com>
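The behaviors listed in the commit message can be sketched as follows. This is a hypothetical illustration, not the actual NemotronNanoV3PromptFormatter: the function name, the turn representation, and the exact whitespace are assumptions; only the <|im_start|>/<|im_end|> and <think></think> tokens come from the description above.

```python
def render_dialog(turns, enable_thinking=True):
    """Hypothetical ChatML-style rendering with the behaviors described above."""
    # 1) Auto-insert an empty system turn when the dialog does not start with one.
    if not turns or turns[0]["role"] != "system":
        turns = [{"role": "system", "content": ""}] + list(turns)
    parts = []
    for turn in turns:
        content = turn["content"]
        if turn["role"] == "assistant":
            # 2) Truncate thinking content in history turns.
            if "</think>" in content:
                content = content.split("</think>", 1)[1].lstrip("\n")
            # 3) Prepend an empty think block to (now) non-thinking assistant turns.
            content = "<think></think>" + content
        parts.append(f"<|im_start|>{turn['role']}\n{content}<|im_end|>\n")
    # 4) Dynamic inference prefix: open a think block only when thinking is on.
    think = "<think>\n" if enable_thinking else "<think></think>"
    return "".join(parts) + "<|im_start|>assistant\n" + think
```

The real formatter additionally has to produce token IDs and masks per turn; a string renderer like this is only useful for eyeballing the template against HF apply_chat_template output.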
Trying to decide if we should make SALM backward compatible with vanilla …

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
Check notice: Code scanning / CodeQL: Unused import (Note)
Copilot Autofix (AI, 16 days ago)
In general, the correct way to fix an unused import in Python is to remove the import statement if the module is never referenced in the file. This reduces visual clutter, avoids implying unnecessary dependencies, and can slightly speed up module import time.
Here, the best fix is to delete the import json line in nemo/collections/common/data/lhotse/text_adapters.py (line 14 in the provided snippet), leaving the rest of the imports unchanged. No additional methods, definitions, or replacement imports are needed, since no code in the shown region uses json. This change preserves all existing functionality because it only removes an unused symbol.
@@ -11,7 +11,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-import json
 import logging
 import math
 import random
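The check behind this notice can be approximated with the standard library: parse the module with ast and report imports whose bound name is never loaded. A simplified sketch only (real tools such as CodeQL or pyflakes also handle __all__, string annotations, re-exports, and scoping):

```python
import ast

def unused_imports(source: str) -> list[str]:
    """Return names imported in `source` but never referenced as a loaded name."""
    tree = ast.parse(source)
    imported = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # `import a.b` binds the top-level name `a` unless aliased.
                imported[(alias.asname or alias.name).split(".")[0]] = alias.name
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = alias.name
    used = {
        node.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)
    }
    return [name for name in imported if name not in used]
```

Running this over the snippet above would flag json and nothing else, matching the CodeQL finding.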
# for turn in turns:
#     if turn["role"] == "user" or turn["role"] == "system":
#         if "/think" in turn["slots"]["message"]:
#             enable_thinking = True
#         elif "/no_think" in turn["slots"]["message"]:
#             enable_thinking = False
Check notice: Code scanning / CodeQL: Commented-out code (Note)
Copilot Autofix (AI, 16 days ago)
AI 16 days ago
In general, to fix commented-out code you either (a) reinstate it as active code because it is required, or (b) remove it (or convert it into concise explanatory comments) if the behavior is not in use. Here, the function already accepts an enable_thinking flag and the commented block redundantly recalculates it from the content of system/user turns; since this logic is disabled and the docstring describes enable_thinking as a parameter, the least disruptive fix is to remove the commented-out code while preserving the surrounding explanatory comments about step 1. Concretely, in nemo/collections/common/prompts/qwen.py, inside Qwen3PromptFormatter.encode_dialog, delete lines 99–105 that begin with # enable_thinking = True and the subsequent commented for turn in turns: loop. No new methods or imports are needed.
@@ -96,13 +96,6 @@

         # 1) (Inference, Optional) Determine if thinking is enabled in user or system turns.
         #    If multiple turns have the tag, we will use the last one.
-        # enable_thinking = True  # By default, it is enabled according to Qwen3 prompt format
-        # for turn in turns:
-        #     if turn["role"] == "user" or turn["role"] == "system":
-        #         if "/think" in turn["slots"]["message"]:
-        #             enable_thinking = True
-        #         elif "/no_think" in turn["slots"]["message"]:
-        #             enable_thinking = False

         # 2) (Training and Inference) Remove thinking content from previous turns.
         for turn in turns[:-1]:
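For reference, the disabled logic removed by the autofix amounts to a last-tag-wins scan over user/system messages. A hypothetical standalone version, with plain (role, message) tuples standing in for the turn/slots structure used in the real formatter:

```python
def resolve_enable_thinking(messages, default=True):
    """Return the thinking flag implied by /think and /no_think tags; last tag wins."""
    enable_thinking = default  # Qwen3 enables thinking by default
    for role, message in messages:
        if role in ("user", "system"):
            if "/think" in message:
                enable_thinking = True
            elif "/no_think" in message:
                enable_thinking = False
    return enable_thinking
```

Note the branch order is safe here only because "/think" is not a substring of "/no_think" (the underscore breaks the match); with tags like "think"/"no_think" the checks would have to be reversed.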
with loss_parallel():
    super().backward(*args, **kwargs)

def configure_gradient_clipping(self, optimizer, gradient_clip_val, gradient_clip_algorithm=None):
Check notice: Code scanning / CodeQL: Explicit returns mixed with implicit (fall through) returns (Note)
Copilot Autofix (AI, 1 day ago)
In general, to fix “explicit + implicit return” issues, ensure that all control-flow paths of a function use explicit return statements, and preferably return the same type (or at least make return None explicit where appropriate). Here, configure_gradient_clipping is a PyTorch Lightning hook whose return value is unused; the method is meant to perform side effects only. The best minimal fix is to add an explicit return None at the end of the method so that the FSDP-specific branch and the case with no parameters both end in an explicit return, while keeping the delegation to super().configure_gradient_clipping(...) unchanged.
Concretely, in nemo/collections/speechlm2/models/salm.py, within SALM.configure_gradient_clipping, after the if params: block, add return None (properly indented). No imports or additional definitions are needed, and no behavior changes: the method already implicitly returns None on that path.
@@ -340,6 +340,7 @@
         params = [p for group in optimizer.param_groups for p in group["params"] if p.grad is not None]
         if params:
             _clip_grad_norm_impl(params, max_norm=gradient_clip_val)
+        return None

     @torch.no_grad()
     def generate(
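Stripped of the Lightning specifics, the pattern the autofix applies is simply that every control-flow path should end in an explicit return. A minimal, hypothetical stand-in for the hook (plain values replace the optimizer and the clipping implementation, which are assumptions here):

```python
def configure_gradient_clipping_sketch(is_fsdp, grads, clip_value):
    """Side-effect-only hook shape: after the fix, each path returns explicitly."""
    if not is_fsdp:
        # Delegation branch (stands in for super().configure_gradient_clipping(...)).
        return "delegated"
    params = [g for g in grads if g is not None]
    if params:
        # Stand-in for _clip_grad_norm_impl(params, max_norm=clip_value).
        params = [max(-clip_value, min(clip_value, g)) for g in params]
    return None  # explicit, instead of implicitly falling off the end
```

Since Lightning ignores the hook's return value, the change is purely about making the method's paths uniform for readers and analyzers; behavior is unchanged.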
import torch
from lhotse import CutSet, SupervisionSegment
from lhotse.testing.dummies import dummy_cut, dummy_recording
from omegaconf import DictConfig, OmegaConf
Check notice: Code scanning / CodeQL: Unused import (Note, test)
Copilot Autofix (AI, 1 day ago)
To fix an unused import, remove only the unused symbol while keeping any used ones. Here, DictConfig is used extensively, but OmegaConf is not. The best fix is to adjust the import statement on line 22 so it only imports DictConfig.
Concretely, in tests/collections/speechlm2/test_salm_automodel_lora.py, replace from omegaconf import DictConfig, OmegaConf with from omegaconf import DictConfig. No other code changes are required since nothing references OmegaConf.
@@ -19,7 +19,7 @@
 import torch
 from lhotse import CutSet, SupervisionSegment
 from lhotse.testing.dummies import dummy_cut, dummy_recording
-from omegaconf import DictConfig, OmegaConf
+from omegaconf import DictConfig

 from nemo.collections.common.data.lhotse import NeMoMultimodalConversation
 from nemo.collections.common.data.lhotse.text_adapters import AudioTurn, TextTurn
from nemo.collections.speechlm2.parts.automodel_lora import (
    LORA_PARAM_PATTERN,
    ensure_lora_trainable,
    make_peft_config,
    maybe_install_lora,
)
Check notice: Code scanning / CodeQL: Unused import (Note, test)
Copilot Autofix (AI, 1 day ago)
To fix an unused-import problem, the standard approach is to remove the specific symbol from the import statement (or remove the entire import if nothing from it is used). In this case, the import is a multi-name import from nemo.collections.speechlm2.parts.automodel_lora, and the only unused symbol is maybe_install_lora.
The best fix that does not change functionality is to edit the from nemo.collections.speechlm2.parts.automodel_lora import (...) block and delete maybe_install_lora, while leaving the other imported names (LORA_PARAM_PATTERN, ensure_lora_trainable, make_peft_config) untouched. This change should be made in tests/collections/speechlm2/test_salm_automodel_lora.py around lines 30–35 where the multi-line import is defined. No new methods, imports, or definitions are required.
@@ -31,7 +31,6 @@
     LORA_PARAM_PATTERN,
     ensure_lora_trainable,
     make_peft_config,
-    maybe_install_lora,
 )

 if torch.cuda.is_available():
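As an aside on what the imported helpers likely do: ensure_lora_trainable presumably keeps only LoRA adapter weights trainable, selecting them via LORA_PARAM_PATTERN. The actual pattern and signatures are not shown in this PR excerpt, so the regex and the dict-based parameter stand-in below are assumptions modeled on PEFT's usual lora_A/lora_B naming:

```python
import re

# Hypothetical pattern; the real LORA_PARAM_PATTERN in
# nemo.collections.speechlm2.parts.automodel_lora may differ.
# PEFT names adapter weights like "...attn.lora_A.default.weight".
LORA_PARAM_PATTERN_SKETCH = re.compile(r"\blora_[AB]\b")

def ensure_lora_trainable_sketch(named_params):
    """Freeze everything except parameters whose names match the LoRA pattern.

    `named_params` is a list of (name, param_dict) pairs; the dict stands in
    for a torch Parameter so the sketch runs without torch installed.
    """
    trainable = []
    for name, param in named_params:
        is_lora = bool(LORA_PARAM_PATTERN_SKETCH.search(name))
        param["requires_grad"] = is_lora  # stand-in for param.requires_grad = is_lora
        if is_lora:
            trainable.append(name)
    return trainable
```

A test like test_salm_automodel_lora.py can then assert that only pattern-matching names remain trainable after LoRA installation.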
(copying my comment from Slack here) In the current PR, does it already work with both HF Automodel and NeMo Automodel? If yes, it looks fine to me. Most of the complexity around model loading seems to be in configure_model() and some utility functions in pretrained.py. Other than that, the annoying thing is having to convert DTensor to a full tensor for some operations (I had to do the same when adding the audio generation head), but I think it's not too bad.
Signed-off-by: Piotr Żelasko <petezor@gmail.com>
Important
The Update branch button must only be pressed on very rare occasions. An outdated branch is never blocking the merge of a PR.
Please reach out to the automation team before pressing that button.
What does this PR do ?
Add a one line overview of what this PR aims to accomplish.
Collection: [Note which collection this PR will affect]
Changelog
Usage
# Add a code snippet demonstrating how to use this
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items you can still open "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
Contributor guidelines contain specific people who can review PRs to various areas.
Additional Information