
# QLoRA Finetuning with IPEX-LLM

We provide an Alpaca-QLoRA example, which ports Alpaca-LoRA to IPEX-LLM (using the QLoRA algorithm) on Intel GPU.

Meanwhile, we also provide a simple example to help you get started with QLoRA finetuning using IPEX-LLM, and a TRL example that shows QLoRA finetuning with IPEX-LLM together with the TRL library.
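
For orientation, the sketch below outlines what a QLoRA finetuning script with IPEX-LLM on an Intel GPU roughly looks like, loosely following the pattern of the simple example. The model id, dataset, and hyperparameters are placeholders chosen for illustration; refer to the example folders in this directory for the maintained, tested scripts.

```python
# Minimal QLoRA finetuning sketch with IPEX-LLM on an Intel GPU ("xpu").
# Assumptions: the model id, dataset, and training hyperparameters below are
# placeholders; see simple-example / trl-example for the maintained scripts.
import torch
import transformers
from transformers import AutoTokenizer
from datasets import load_dataset
from peft import LoraConfig

from ipex_llm.transformers import AutoModelForCausalLM
from ipex_llm.transformers.qlora import get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder model id

tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model with 4-bit (NF4) weights so that LoRA adapters can be
# trained on top of a quantized backbone, which is the core idea of QLoRA.
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_low_bit="nf4",
    optimize_model=False,
    torch_dtype=torch.bfloat16,
    modules_to_not_convert=["lm_head"],
)
model = model.to("xpu")
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters to the attention projections.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Tokenize a small text dataset (placeholder) and run a short training loop.
data = load_dataset("Abirate/english_quotes", split="train")
data = data.map(lambda s: tokenizer(s["quote"], truncation=True, max_length=256))

trainer = transformers.Trainer(
    model=model,
    train_dataset=data,
    args=transformers.TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=1,
        max_steps=200,
        learning_rate=2e-4,
        bf16=True,
        output_dir="outputs",
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The key difference from a plain PEFT QLoRA script is that the quantized model loading (`load_in_low_bit="nf4"`) and the `get_peft_model` / `prepare_model_for_kbit_training` helpers come from `ipex_llm.transformers`, so the quantization and training run on the Intel GPU (`xpu`) device.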