
# QLoRA Finetuning with IPEX-LLM

We provide the [Alpaca-QLoRA example](./alpaca-qlora), which ports Alpaca-LoRA to IPEX-LLM (using the QLoRA algorithm) on Intel GPUs.

We also provide a [simple example](./simple-example) to help you get started with QLoRA finetuning using IPEX-LLM, and a [TRL example](./trl-example) that shows QLoRA finetuning using IPEX-LLM together with the TRL library.
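As background for what "QLoRA" means in these examples: the base model weights are kept frozen (and stored in low-bit precision, 4-bit NF4 in real QLoRA), while small low-rank adapter matrices are the only parameters that get trained. Below is a minimal, library-free sketch of the low-rank-adapter arithmetic with toy numbers — a conceptual illustration only, not the IPEX-LLM API:

```python
# Conceptual sketch of the LoRA update used by QLoRA (toy sizes, plain Python).
# The frozen base weight W is augmented by a trainable low-rank product B @ A,
# so the effective forward pass is y = W @ x + scale * (B @ (A @ x)).

def matmul(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

# Frozen base weight (2x3) -- in real QLoRA this is stored 4-bit quantized.
W = [[0.5, -1.0, 0.25],
     [1.5,  0.0, -0.5]]
x = [1.0, 2.0, 3.0]

# LoRA adapters with rank r=1: B is 2x1, A is 1x3.
# Only these small matrices receive gradient updates during finetuning.
B = [[0.1], [0.2]]
A = [[0.3, 0.0, -0.3]]

def lora_forward(W, B, A, x, scale=1.0):
    base = matmul(W, x)          # frozen path
    delta = matmul(B, matmul(A, x))  # low-rank trainable path
    return [b + scale * d for b, d in zip(base, delta)]

y = lora_forward(W, B, A, x)
```

The key design point is that `B @ A` has the same shape as `W` but only `r * (rows + cols)` trainable parameters, which is what makes finetuning a quantized multi-billion-parameter model feasible on a single Intel GPU.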