ipex-llm/python/llm/example/GPU/LLM-Finetuning

Running LLM Finetuning using IPEX-LLM on Intel GPU

This folder contains examples of running different training modes with IPEX-LLM on Intel GPUs:

  • LoRA: examples of running LoRA finetuning
  • QLoRA: examples of running QLoRA finetuning
  • QA-LoRA: examples of running QA-LoRA finetuning
  • ReLora: examples of running ReLora finetuning
  • DPO: examples of running DPO finetuning
  • common: common templates and utility classes in finetuning examples
  • HF-PEFT: run finetuning on Intel GPU using Hugging Face PEFT code without modification
  • axolotl: LLM finetuning on Intel GPU using axolotl without writing code

Verified Models

Model       Finetune mode                   Frameworks Support
LLaMA 2/3   LoRA, QLoRA, QA-LoRA, ReLora    HF-PEFT, axolotl
Mistral     LoRA, QLoRA                     DPO
ChatGLM 3   LoRA, QLoRA                     HF-PEFT
Qwen-1.5    QLoRA                           HF-PEFT
Baichuan2   QLoRA                           HF-PEFT

Troubleshooting

  • If you fail to finetune on multiple cards because of the following error message:

    RuntimeError: oneCCL: comm_selector.cpp:57 create_comm_impl: EXCEPTION: ze_data was not initialized
    

    Please try sudo apt install level-zero-dev to fix it.

  • Please raise the system open file limit using ulimit -n 1048576. Otherwise, you may encounter the error Too many open files.
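    The same limit can also be checked and raised from inside a Python launcher before training starts. A minimal sketch using the standard resource module (Linux-only; raising the soft limit beyond the hard limit still requires root, just like ulimit):

    ```python
    import resource

    # Query the current open-file limits (soft, hard).
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

    # Raise the soft limit up to the hard limit. Going beyond the
    # hard limit requires elevated privileges, so we stop there.
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    ```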

  • If the application raises wandb.errors.UsageError: api_key not configured (no-tty), please log in to wandb or disable wandb with the following command:

    export WANDB_MODE=offline
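    If the training entry point is your own Python script, the same effect can be achieved by setting the variable before wandb (or the training framework) is imported. A minimal sketch (WANDB_MODE=offline is documented wandb behavior):

    ```python
    import os

    # Must be set before wandb is imported, so runs are logged
    # locally instead of syncing to the wandb servers.
    os.environ["WANDB_MODE"] = "offline"
    ```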
  • If the application raises Hugging Face related errors, e.g., NewConnectionError or Failed to download, please download the models and datasets in advance, set the model and data paths accordingly, and then enable offline mode with the following command:

    export HF_HUB_OFFLINE=1
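    Equivalently, offline mode can be forced from Python before any model loading. A minimal sketch (HF_HUB_OFFLINE and TRANSFORMERS_OFFLINE are standard Hugging Face environment variables; set them before importing transformers, and make sure the models and datasets were downloaded beforehand):

    ```python
    import os

    # Force huggingface_hub and transformers to use only local files.
    # Assumes models/datasets already exist in the local cache or at
    # the paths your training script points to.
    os.environ["HF_HUB_OFFLINE"] = "1"
    os.environ["TRANSFORMERS_OFFLINE"] = "1"
    ```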