Running LLM Finetuning using IPEX-LLM on Intel GPU

This folder contains examples of running different training modes with IPEX-LLM on Intel GPU:

  • LoRA: examples of running LoRA finetuning
  • QLoRA: examples of running QLoRA finetuning (a minimal setup sketch follows this list)
  • QA-LoRA: examples of running QA-LoRA finetuning
  • ReLora: examples of running ReLora finetuning
  • DPO: examples of running DPO finetuning
  • common: common templates and utility classes in finetuning examples
  • HF-PEFT: run finetuning on Intel GPU using Hugging Face PEFT code without modification
  • axolotl: LLM finetuning on Intel GPU using axolotl without writing code
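
The sketch below shows, in broad strokes, how the QLoRA examples in this folder load a model in NF4 low-bit format and wrap it with a LoRA adapter on an Intel GPU. The model id, target modules, and LoRA hyper-parameters here are illustrative placeholders; see the scripts under QLoRA for the exact, model-specific settings.

    import torch
    from peft import LoraConfig
    from ipex_llm.transformers import AutoModelForCausalLM
    from ipex_llm.transformers.qlora import get_peft_model, prepare_model_for_kbit_training

    # Load the base model with weights quantized to NF4 (the QLoRA low-bit format)
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",            # example model id
        load_in_low_bit="nf4",
        optimize_model=False,
        torch_dtype=torch.bfloat16,
        modules_to_not_convert=["lm_head"],
    )
    model = model.to("xpu")                    # run on the Intel GPU
    model = prepare_model_for_kbit_training(model)

    # Attach trainable LoRA adapters on top of the frozen low-bit weights
    lora_config = LoraConfig(
        r=8, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj"],
        bias="none", task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    # ...then train with transformers.Trainer as in the example scripts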

Verified Models

Model        Finetune mode                   Frameworks Support
LLaMA 2/3    LoRA, QLoRA, QA-LoRA, ReLora    HF-PEFT, axolotl
Mistral      LoRA, QLoRA                     DPO
ChatGLM 3    LoRA, QLoRA                     HF-PEFT
Qwen-1.5     QLoRA                           HF-PEFT
Baichuan2    QLoRA                           HF-PEFT

Troubleshooting

  • If you fail to fine-tune on multiple cards because of the following error message:

    RuntimeError: oneCCL: comm_selector.cpp:57 create_comm_impl: EXCEPTION: ze_data was not initialized
    

    Please try sudo apt install level-zero-dev to fix it.

  • Please raise the system open file limit using ulimit -n 1048576. Otherwise, you may hit the error Too many open files.

  • If the application raises wandb.errors.UsageError: api_key not configured (no-tty), please log in to wandb or disable wandb logging with this command:

export WANDB_MODE=offline
  • If the application raises Hugging Face related errors such as NewConnectionError or Failed to download, please download the models and datasets in advance, set the model and data paths, and then set HF_HUB_OFFLINE with this command (see the download sketch below):
export HF_HUB_OFFLINE=1
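
As a hypothetical sketch (the model and dataset ids below are placeholders, not requirements of the examples), you can pre-download everything once while online and then point the finetuning script at the local copies:

    from huggingface_hub import snapshot_download

    # Download once while online; both calls return the local cache path.
    model_path = snapshot_download("meta-llama/Llama-2-7b-hf")                   # example model id
    data_path = snapshot_download("yahma/alpaca-cleaned", repo_type="dataset")   # example dataset id

    # Afterwards, export HF_HUB_OFFLINE=1 and pass model_path / data_path
    # to the example script's model and data arguments.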