
Running LLM Finetuning using IPEX-LLM on Intel GPU

This folder contains examples of running different training modes with IPEX-LLM on Intel GPUs:

  • LoRA: examples of running LoRA finetuning
  • QLoRA: examples of running QLoRA finetuning
  • QA-LoRA: examples of running QA-LoRA finetuning
  • ReLora: examples of running ReLora finetuning
  • DPO: examples of running DPO finetuning
  • common: common templates and utility classes in finetuning examples
  • HF-PEFT: run finetuning on Intel GPU using Hugging Face PEFT code without modification
  • axolotl: LLM finetuning on Intel GPU using axolotl without writing code

Troubleshooting

  • If finetuning on multiple cards fails with the following error message:

    RuntimeError: oneCCL: comm_selector.cpp:57 create_comm_impl: EXCEPTION: ze_data was not initialized
    

    please run sudo apt install level-zero-dev to fix it.

  • Please raise the system open file limit with ulimit -n 1048576. Otherwise, you may encounter the error Too many open files.
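As a minimal sketch of the open-file-limit fix above (the 1048576 value comes from this README; whether raising the soft limit succeeds depends on your system's hard limit and may require admin configuration):

```shell
# Show the current soft limit on open file descriptors.
echo "current open file limit: $(ulimit -n)"

# Raise the soft limit for this shell session only, as suggested above.
# Note: this does not persist across sessions; raising it above the hard
# limit fails, in which case a persistent system-wide change is needed.
ulimit -n 1048576 2>/dev/null \
  || echo "could not raise the limit in this shell; it may need a system-level change"

echo "new open file limit: $(ulimit -n)"
```

Run this in the same shell session you will launch finetuning from, since the new limit only applies to that shell and its child processes.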