
LoftQ: edit README.md and example files #1276

Merged

younesbelkada merged 5 commits into huggingface:main from yxli2123:loftq on Dec 17, 2023
Conversation

@yxli2123 (Contributor)

Hi,

I would like to update the README.md file and example scripts for LoftQ.
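For context, LoftQ in PEFT is enabled by passing `init_lora_weights="loftq"` together with a `LoftQConfig` to `LoraConfig`. The sketch below is illustrative, not code from this PR: it assumes peft >= 0.7.0, and the rank, bit width, and target module names are placeholder choices.

```python
# Minimal sketch of wiring LoftQ initialization into a LoRA config with PEFT.
# Assumes peft >= 0.7.0; rank, bit width, and target modules are illustrative.

def build_loftq_lora_config(bits: int = 4, rank: int = 16):
    """Build a LoraConfig whose LoRA matrices are initialized via LoftQ."""
    from peft import LoftQConfig, LoraConfig  # lazy import: peft is an optional dep here

    loftq_config = LoftQConfig(loftq_bits=bits)  # quantize base weights to `bits` bits
    return LoraConfig(
        init_lora_weights="loftq",   # use LoftQ instead of the default LoRA init
        loftq_config=loftq_config,
        r=rank,
        lora_alpha=rank,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )
```

The resulting config would then be passed to `get_peft_model(model, config)` as with any other LoRA setup.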

@younesbelkada (Contributor) left a comment


Thank you! Left a single comment

return base_model_dir, lora_model_dir


def load_loftq(base_model_path, lora_adapter_path):
@younesbelkada (Contributor)

Why has this been removed?

@yxli2123 (Contributor)

I think this function was supposed to confirm that everything works fine after the LoftQ weight initialization step, so it is not a required step.
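For reference, a sanity-check helper along those lines might look like the following. This is a hedged reconstruction, not the removed code: the function name and signature match the diff above, but the body and the use of `PeftModel.from_pretrained` are assumptions.

```python
def load_loftq(base_model_path, lora_adapter_path):
    """Smoke test: reload the saved base model and attach the
    LoftQ-initialized LoRA adapter to confirm the saved artifacts are usable.
    Hypothetical reconstruction; only the signature comes from the diff."""
    from transformers import AutoModelForCausalLM  # lazy import: heavy optional deps
    from peft import PeftModel

    base_model = AutoModelForCausalLM.from_pretrained(base_model_path)
    # Attaching the adapter fails loudly if the LoftQ init step produced
    # incompatible shapes or missing tensors, which is the point of the check.
    model = PeftModel.from_pretrained(base_model, lora_adapter_path)
    return model
```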

@younesbelkada (Contributor)

Perfect!

@pacman100 (Contributor) left a comment


Thank you @yxli2123 for updating README and examples for the LoftQ method, LGTM!

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@younesbelkada younesbelkada merged commit 46a84bd into huggingface:main Dec 17, 2023
BenjaminBossan added a commit to BenjaminBossan/peft that referenced this pull request Dec 18, 2023
Due to PR huggingface#1276, the bug that prevented use of LoftQ with 8bit
quantization has now been fixed. Therefore, the tests no longer need to
be skipped.

Note

I tested locally with GPU and the tests passed.
BenjaminBossan added a commit that referenced this pull request Dec 18, 2023
Due to PR #1276, the bug that prevented use of LoftQ with 8bit
quantization has now been fixed. Therefore, the tests no longer need to
be skipped.
Guy-Bilitski pushed a commit to Guy-Bilitski/peft that referenced this pull request May 13, 2025
* fix when num_bits == 2 or 8

* try 13b
Guy-Bilitski pushed a commit to Guy-Bilitski/peft that referenced this pull request May 13, 2025
Due to PR huggingface#1276, the bug that prevented use of LoftQ with 8bit
quantization has now been fixed. Therefore, the tests no longer need to
be skipped.
cyyever pushed a commit to cyyever/peft that referenced this pull request Sep 4, 2025
* Update sft_trainer.py

* Update trl/trainer/sft_trainer.py


4 participants