
QLoRA Finetuning with IPEX-LLM

We provide the Alpaca-QLoRA example, which ports Alpaca-LoRA to IPEX-LLM (using the QLoRA algorithm) on Intel GPUs.

We also provide a simple example to help you get started with QLoRA finetuning using IPEX-LLM, as well as a TRL example that shows how to combine QLoRA finetuning using IPEX-LLM with the TRL library.
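At a high level, the examples follow the same pattern: load a base model in 4-bit NF4 precision, move it to the Intel GPU (`xpu`), and attach LoRA adapters before training. The sketch below illustrates that flow under some assumptions; the exact base model, LoRA hyperparameters, and target modules are illustrative placeholders, and running it requires an Intel GPU with the `ipex-llm` and `peft` packages installed.

```python
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from ipex_llm.transformers.qlora import get_peft_model, prepare_model_for_kbit_training
from peft import LoraConfig

# Load the base model with 4-bit NF4 quantization (QLoRA-style weight storage).
# The model id here is a placeholder; substitute the checkpoint you want to finetune.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    load_in_low_bit="nf4",
    optimize_model=False,
    torch_dtype=torch.float16,
    modules_to_not_convert=["lm_head"],  # keep the output head in higher precision
)
model = model.to("xpu")  # move to the Intel GPU

# Prepare the quantized model for k-bit training, then wrap it with LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,                 # adapter rank (illustrative value)
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj"],  # varies by model architecture
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# `model` can now be passed to a standard Hugging Face `Trainer`
# (or TRL's `SFTTrainer`, as in the trl-example) for finetuning.
```

Only the small LoRA adapter weights are updated during training, while the NF4-quantized base weights stay frozen, which is what keeps QLoRA's memory footprint low.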