
Running LLM Finetuning using IPEX-LLM on Intel GPU

This folder contains examples of running different training modes with IPEX-LLM on Intel GPU:

  • LoRA: examples of running LoRA finetuning
  • QLoRA: examples of running QLoRA finetuning
  • QA-LoRA: examples of running QA-LoRA finetuning
  • ReLora: examples of running ReLora finetuning
  • DPO: examples of running DPO finetuning
  • common: common templates and utility classes in finetuning examples
  • HF-PEFT: run finetuning on Intel GPU using Hugging Face PEFT code without modification
  • axolotl: LLM finetuning on Intel GPU using axolotl without writing code

Troubleshooting

  • If you fail to finetune on multiple GPUs because of the following error message:

    RuntimeError: oneCCL: comm_selector.cpp:57 create_comm_impl: EXCEPTION: ze_data was not initialized
    

    please try sudo apt install level-zero-dev to fix it.

  • Please raise the system open file limit using ulimit -n 1048576. Otherwise, you may encounter the error Too many open files.
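The ulimit change above applies only to the current shell session. A minimal sketch of checking and raising the limit before launching a finetuning run (whether the value can be raised depends on your system's hard limit):

```shell
# Show the current per-process open file limit (the soft limit)
ulimit -n

# Raise the soft limit for this shell and its child processes;
# the system hard limit may cap the value you can actually set
ulimit -n 1048576

# Verify the new limit before starting finetuning
ulimit -n
```

To make the higher limit persist across sessions, it can also be configured system-wide, for example via /etc/security/limits.conf on many Linux distributions.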