4 changes: 2 additions & 2 deletions chapters/en/chapter12/5.mdx
@@ -74,7 +74,7 @@ tokenizer = AutoTokenizer.from_pretrained(model_id)

Now, let's load the LoRA configuration. We'll take advantage of LoRA to reduce the number of trainable parameters, and in turn the memory footprint we need to fine-tune the model.

-If you're not familiar with LoRA, you can read more about it in [Chapter 11](https://huggingface.co/learn/course/en/chapter11/3).
+If you're not familiar with LoRA, you can read more about it in [Chapter 11](https://huggingface.co/learn/llm-course/chapter11/4).

```python
# Load LoRA
```

@@ -109,7 +109,7 @@ def reward_len(completions, **kwargs):
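The hunk header above shows only the signature of `reward_len` from the collapsed context. As a hedged illustration of what such a length-based reward looks like (the target length of 20 characters is an assumption, not taken from this diff), it might be:

```python
# Hypothetical sketch of a length-based GRPO reward function; the target
# length (20 characters) is an assumed value, not taken from this diff.
def reward_len(completions, **kwargs):
    target = 20
    # Penalize each completion by its distance from the target length,
    # so completions closest to 20 characters receive the highest reward.
    return [-abs(target - len(completion)) for completion in completions]

print(reward_len(["hi", "a" * 20]))  # [-18, 0]
```

GRPO reward functions take the batch of completions and return one scalar per completion; the trainer normalizes these scores within each sampled group.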

Now, let's define the training arguments. We'll use the `GRPOConfig` class to define the training arguments in a typical `transformers` style.

-If this is the first time you're defining training arguments, you can check the [TrainingArguments](https://huggingface.co/docs/transformers/en/main_classes/trainer#trainingarguments) class for more information, or [Chapter 2](https://huggingface.co/learn/course/en/chapter2/1) for a detailed introduction.
+If this is the first time you're defining training arguments, you can check the [TrainingArguments](https://huggingface.co/docs/transformers/en/main_classes/trainer#trainingarguments) class for more information, or [Chapter 2](https://huggingface.co/learn/llm-course/en/chapter2/1) for a detailed introduction.

```python
# Training arguments
```
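The collapsed `# Training arguments` block above would typically instantiate `GRPOConfig`. A minimal sketch, assuming `trl` is installed; every value shown is illustrative, not the course chapter's exact configuration:

```python
# Illustrative GRPOConfig sketch; all values here are assumptions,
# not the settings used in the course chapter.
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="GRPO",
    learning_rate=2e-5,        # typical fine-tuning learning rate
    per_device_train_batch_size=8,
    num_generations=4,         # completions sampled per prompt (the "group" in GRPO)
    max_completion_length=96,  # cap on generated completion length
    logging_steps=10,
)
```

Because `GRPOConfig` subclasses `transformers.TrainingArguments`, any standard `TrainingArguments` field can also be passed here.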