
# TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition

Tianwei Lin1,2, Jiang Liu1,2, Wenqiao Zhang1*, Yang Dai1, Haoyuan Li2, Zhelun Yu2,

Wanggui He2, Juncheng Li1*, Jiannan Guo1, Hao Jiang2, Siliang Tang1, Yueting Zhuang1

1Zhejiang University, 2Alibaba


## 🚀 Quick Start

### 🛠️ Environment Preparation

```shell
# 1. Clone the project
git clone https://github.com/DCDmllm/TeamLoRA.git
cd TeamLoRA

# 2. Set up the environment
conda create -n TeamLoRA python=3.10.14 -y
conda activate TeamLoRA
pip install -r requirements.txt
```

### 🗂️ Data Template

Training data is a JSON list of instruction-tuning records, each with three string fields:

```json
[
    {
        "instruction": "",
        "input": "",
        "output": ""
    },
    ...
]
```
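A minimal sketch of producing a compatible `train_datas.json` (the field names follow the template above; the two sample records are invented for illustration):

```python
import json

# Two toy records in the TeamLoRA data template (content is illustrative).
samples = [
    {
        "instruction": "Translate the input to French.",
        "input": "Hello, world!",
        "output": "Bonjour, le monde !",
    },
    {
        "instruction": "What is 2 + 2?",
        "input": "",
        "output": "4",
    },
]

# train_datas.json is the file name passed to train.py via --train_dataset_path.
with open("train_datas.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, ensure_ascii=False, indent=4)
```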

### 📈 Train

```shell
bash ./train.sh
```

or

```shell
python train.py \
    --model_name_or_path Llama-2-7b-hf/ \
    --enable_peft True \
    --lora_rank 8 \
    --lora_dropout 0.1 \
    --lora_bias none \
    --lora_num 4 \
    --train_dataset_path train_datas.json \
    --max_seq_length 1024 \
    --output_dir output/ \
    --do_train True \
    --do_eval False \
    --eval_strategy no \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 2 \
    --learning_rate 2e-4 \
    --weight_decay 0.0 \
    --num_train_epochs 1.0 \
    --lr_scheduler_type cosine \
    --warmup_ratio 0.03 \
    --logging_steps 5 \
    --save_strategy steps \
    --save_steps 88888 \
    --save_total_limit 1 \
    --bf16 True \
    --fp16 False \
    --dataloader_num_workers 4 \
    --report_to wandb \
    --gradient_checkpointing True
```
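How `train.py` renders a data record into a training prompt is not shown here; a common convention for this `instruction`/`input`/`output` template shape is an Alpaca-style format, sketched below. The template string is an assumption for illustration, not the project's actual formatting code.

```python
# Hypothetical Alpaca-style prompt template; the real template used by
# train.py may differ.
TEMPLATE = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def format_example(example: dict) -> str:
    """Render one data-template record into a single training string."""
    return TEMPLATE.format(**example)

prompt = format_example({
    "instruction": "What is 2 + 2?",
    "input": "",
    "output": "4",
})
```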

### 🔍 Inference

```shell
bash ./eval.sh
```

or

```shell
python eval.py \
    --model_name_or_path Llama-2-7b-hf/ \
    --enable_peft True \
    --lora_rank 8 \
    --lora_dropout 0.1 \
    --lora_bias none \
    --lora_num 4 \
    --lora_weight_path output/adapter_model.bin \
    --instruction "your instruction" \
    --question "your question"
```

## ⚡ How to Use in Other Projects?

  1. Copy the `/my_peft` directory into the target project.
  2. Call the following interface to configure TeamLoRA for the model:

```python
import peft
from my_peft import TeamLoraConfig, get_peft_model

# `lora_num` sets the number of LoRA experts that collaborate and compete.
lora_config = TeamLoraConfig(
    r=lora_rank,
    lora_alpha=lora_rank * 2,
    target_modules=['k_proj', 'q_proj', 'v_proj', 'o_proj',
                    'down_proj', 'gate_proj', 'up_proj'],
    lora_dropout=lora_dropout,
    bias=lora_bias,
    task_type=peft.TaskType.CAUSAL_LM,
    lora_num=lora_num,
)
print("loading PEFT...")
model = get_peft_model(model, lora_config)
```
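After wrapping the model, it can help to verify that only the adapter parameters remain trainable. If `my_peft` mirrors the Hugging Face PEFT API, `model.print_trainable_parameters()` does this directly; the equivalent manual count below works for any `torch.nn.Module` (the toy two-layer module is invented for the demo):

```python
import torch.nn as nn

def count_trainable(model: nn.Module) -> tuple[int, int]:
    """Return (trainable, total) parameter counts for any nn.Module."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, total

# Demo on a toy module: freeze the first layer, as PEFT freezes the base model.
toy = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
for p in toy[0].parameters():
    p.requires_grad = False

trainable, total = count_trainable(toy)
print(f"trainable: {trainable} / {total}")
```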

## 🔗 Citation

If you find this work useful, please consider giving this repository a star and citing our paper as follows:

```bibtex
@article{lin2024teamlora,
  title={{TeamLoRA}: Boosting Low-Rank Adaptation with Expert Collaboration and Competition},
  author={Lin, Tianwei and Liu, Jiang and Zhang, Wenqiao and Li, Zhaocheng and Dai, Yang and Li, Haoyuan and Yu, Zhelun and He, Wanggui and Li, Juncheng and Jiang, Hao and others},
  journal={arXiv preprint arXiv:2408.09856},
  year={2024}
}
```

## ⚖️ License

This repository is released under the Apache License 2.0.

## About

[ACL 2025] Official repository for the paper "TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition".
