From 492ed3fd410fdb95ec68f129f5c4191cc3a03652 Mon Sep 17 00:00:00 2001
From: Qiyuan Gong
Date: Tue, 21 May 2024 15:49:15 +0800
Subject: [PATCH] Add verified models to GPU finetune README (#11088)

* Add verified models to GPU finetune README
---
 python/llm/example/GPU/LLM-Finetuning/README.md | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/python/llm/example/GPU/LLM-Finetuning/README.md b/python/llm/example/GPU/LLM-Finetuning/README.md
index 5ffd83dd..0c0f86ba 100644
--- a/python/llm/example/GPU/LLM-Finetuning/README.md
+++ b/python/llm/example/GPU/LLM-Finetuning/README.md
@@ -11,6 +11,15 @@ This folder contains examples of running different training mode with IPEX-LLM o
 - [HF-PEFT](HF-PEFT): run finetuning on Intel GPU using Hugging Face PEFT code without modification
 - [axolotl](axolotl): LLM finetuning on Intel GPU using axolotl without writing code
 
+## Verified Models
+
+| Model      | Finetune mode                                                      | Frameworks Support                     |
+|------------|--------------------------------------------------------------------|----------------------------------------|
+| LLaMA 2/3  | [LoRA](LoRA), [QLoRA](QLoRA), [QA-LoRA](QA-LoRA), [ReLora](ReLora) | [HF-PEFT](HF-PEFT), [axolotl](axolotl) |
+| Mistral    | [LoRA](DPO), [QLoRA](DPO)                                          | [DPO](DPO)                             |
+| ChatGLM 3  | [QLoRA](QLoRA/alpaca-qlora#3-qlora-finetune)                       | HF-PEFT                                |
+| Qwen-1.5   | [QLoRA](QLoRA/alpaca-qlora#3-qlora-finetune)                       | HF-PEFT                                |
+| Baichuan2  | [QLoRA](QLoRA/alpaca-qlora#3-qlora-finetune)                       | HF-PEFT                                |
 ## Troubleshooting
 - If you fail to finetune on multi cards because of following error message:
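
As context for the QLoRA mode listed for most models in the new table, below is a minimal, illustrative sketch of QLoRA finetuning on an Intel GPU with IPEX-LLM. It is not the project's reference script: the base model, dataset, and hyperparameters are placeholders, and it assumes the `ipex_llm.transformers.qlora` helpers (`get_peft_model`, `prepare_model_for_kbit_training`), the `load_in_low_bit="nf4"` loading path, and the `"xpu"` device used by the QLoRA/alpaca-qlora example in this folder; refer to that example for the maintained, end-to-end version.

```python
# Illustrative QLoRA finetuning sketch for Intel GPU with IPEX-LLM.
# Base model, dataset, and hyperparameters are placeholders; see the
# QLoRA/alpaca-qlora example in this folder for the maintained script.
import torch
from transformers import (AutoTokenizer, Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)
from datasets import load_dataset
from peft import LoraConfig
from ipex_llm.transformers import AutoModelForCausalLM
from ipex_llm.transformers.qlora import get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder; any verified model from the table

# Load the base model in 4-bit (NF4) and move it to the Intel GPU ("xpu").
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_low_bit="nf4",
    optimize_model=False,
    torch_dtype=torch.bfloat16,
    modules_to_not_convert=["lm_head"],
)
model = model.to("xpu")
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters to the attention projections.
lora_config = LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.05, bias="none",
    target_modules=["q_proj", "k_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Tiny placeholder dataset; the in-repo examples use alpaca-style instruction data.
data = load_dataset("Abirate/english_quotes", split="train[:200]")
data = data.map(lambda s: tokenizer(s["quote"], truncation=True, max_length=256),
                batched=True)

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        max_steps=200,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=20,
        output_dir="outputs",
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The same skeleton carries over to the other modes in the table by swapping the adapter setup (LoRA, QA-LoRA, ReLoRA) or by using unmodified Hugging Face PEFT code as described in the HF-PEFT example.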